jaredleekatzman / DeepSurv

DeepSurv is a deep learning approach to survival analysis.
MIT License

Run py.test problem #10

Open caicai2526 opened 7 years ago

caicai2526 commented 7 years ago

I followed the README file, but when I run py.test I hit some problems. How should I deal with them? I have copied the problems below.

ccf@ccf-Lenovo-Product:~/CCF/DeepSurv-master$ py.test
============================= test session starts ==============================
platform linux2 -- Python 2.7.6, pytest-3.1.2, py-1.4.34, pluggy-0.4.0
rootdir: /home/ccf/CCF/DeepSurv-master, inifile:
collected 8 items

tests/test_deepsurv.py .F.FEEEE

==================================== ERRORS ====================================
____ ERROR at setup of TestDeepSurvTrain.test_train ____

self = <class test_deepsurv.TestDeepSurvTrain at 0x7f9bffc83120>

@classmethod
def setup_class(self):
    self.train, self.valid, self.test = generate_data(treatment_group=True)

    hyperparams = {
        'n_in': 10,
        'learning_rate': 1e-5,
        'hidden_layers_sizes': [10]
    }
    network = DeepSurv(**hyperparams)
    log = network.train(self.train, self.valid,
      n_epochs=10,validation_frequency=1)

tests/test_deepsurv.py:63:


deepsurv/deep_surv.py:397: in train
    update_fn = update_fn, **kwargs
deepsurv/deep_surv.py:243: in _get_train_valid_fn
    learning_rate = learning_rate, **kwargs


self = <deepsurv.deep_surv.DeepSurv instance at 0x7f9bfa3d5b90>, L1_reg = 0.0
L2_reg = 0.0, update_fn = <function nesterov_momentum at 0x7f9c04ebfaa0>
max_norm = None, deterministic = False
kwargs = {'learning_rate': <TensorType(float32, scalar)>, 'momentum': array(0.0, dtype=float32)}
loss = Elemwise{add,no_inplace}.0
updates = OrderedDict([(W, Elemwise{add,no_inplace}.0), (b, Elemwise{add,no_inplace}.0),...b,no_inplace}.0), (<TensorType(float64, vector)>, Elemwise{sub,no_inplace}.0)])

def _get_loss_updates(self,
        L1_reg = 0.0, L2_reg = 0.001,
        update_fn = lasagne.updates.nesterov_momentum,
        max_norm = None, deterministic = False,
        **kwargs):
    """
        Returns Theano expressions for the network's loss function and parameter
            updates.

        Parameters:
            L1_reg: float for L1 weight regularization coefficient.
            L2_reg: float for L2 weight regularization coefficient.
            max_norm: If not None, constraints the norm of gradients to be less
                than max_norm.
            deterministic: True or False. Determines if the output of the network
                is calculated determinsitically.
            update_fn: lasagne update function.
                Default: Stochastic Gradient Descent with Nesterov momentum
            **kwargs: additional parameters to provide to update_fn.
                For example: momentum

        Returns:
            loss: Theano expression for a penalized negative log likelihood.
            updates: Theano expression to update the parameters using update_fn.
        """

    loss = (
        self._negative_log_likelihood(self.E, deterministic)
        + regularize_layer_params(self.network,l1) * L1_reg
        + regularize_layer_params(self.network, l2) * L2_reg
    )

    if max_norm:
        grads = T.grad(loss,self.params)
        scaled_grads = lasagne.updates.total_norm_constraint(grads, max_norm)
        updates = update_fn(
            grads, self.params, **kwargs
        )
        return loss, updates

    updates = update_fn(
            loss, self.params, **kwargs
        )

    # If the model was loaded from file, reload params
>   if self.restored_update_params:
E       AttributeError: DeepSurv instance has no attribute 'restored_update_params'

deepsurv/deep_surv.py:209: AttributeError
---------------------------- Captured stdout setup -----------------------------
[ 1.  0.  1. ...,  1.  0.  0.]
[ 1.  1.  1. ...,  1.  1.  1.]
[ 0.  1.  1. ...,  1.  0.  1.]
____ ERROR at setup of TestDeepSurvTrain.test_network_predict_risk ____

[identical setup_class traceback as for test_train above, ending in the same AttributeError at deepsurv/deep_surv.py:209]

____ ERROR at setup of TestDeepSurvTrain.test_get_concordance_index ____

[identical setup_class traceback again]

____ ERROR at setup of TestDeepSurvTrain.test_recommend_treatment ____

[identical setup_class traceback again]

=================================== FAILURES ===================================
____ TestDeepSurvInit.test_deepsurv_initialize_batch_norm ____

self = <test_deepsurv.TestDeepSurvInit instance at 0x7f9bfa41f488>

def test_deepsurv_initialize_batch_norm(self):
  network = DeepSurv(batch_norm = True, **self.hyperparams)

tests/test_deepsurv.py:39:


self = <deepsurv.deep_surv.DeepSurv instance at 0x7f9bfa41fc68>, n_in = 10
learning_rate = 1e-05, hidden_layers_sizes = [10, 10], lr_decay = 0.0
momentum = 0.9, L2_reg = 0.0, L1_reg = 0.0, activation = 'rectify'
dropout = None, batch_norm = True, standardize = False

def __init__(self, n_in,
        learning_rate, hidden_layers_sizes = None,
        lr_decay = 0.0, momentum = 0.9,
        L2_reg = 0.0, L1_reg = 0.0,
        activation = "rectify",
        dropout = None,
        batch_norm = False,
        standardize = False,
        ):
    """
        This class implements and trains a DeepSurv model.

        Parameters:
            n_in: number of input nodes.
            learning_rate: learning rate for training.
            lr_decay: coefficient for Power learning rate decay.
            L2_reg: coefficient for L2 weight decay regularization. Used to help
                prevent the model from overfitting.
            L1_reg: coefficient for L1 weight decay regularization
            momentum: coefficient for momentum. Can be 0 or None to disable.
            hidden_layer_sizes: a list of integers to determine the size of
                each hidden layer.
            activation: a lasagne activation class.
                Default: lasagne.nonlinearities.rectify
            batch_norm: True or False. Include batch normalization layers.
            dropout: if not None or 0, the percentage of dropout to include
                after each hidden layer. Default: None
            standardize: True or False. Include standardization layer after
                input layer.
        """

    self.X = T.fmatrix('x')  # patients covariates
    self.E = T.ivector('e') # the observations vector

    # Default Standardization Values: mean = 0, std = 1
    self.offset = theano.shared(numpy.zeros(shape = n_in, dtype=numpy.float32))
    self.scale = theano.shared(numpy.ones(shape = n_in, dtype=numpy.float32))

    network = lasagne.layers.InputLayer(shape=(None,n_in),
        input_var = self.X)

    if standardize:
        network = lasagne.layers.standardize(network,self.offset,
                                            self.scale,
                                            shared_axes = 0)
    self.standardize = standardize

    if activation == 'rectify':
        activation_fn = lasagne.nonlinearities.rectify
    else:
        raise IllegalArgumentException("Unknown activation function: %s" % activation)

    # Construct Neural Network
    for n_layer in (hidden_layers_sizes or []):
        if activation_fn == lasagne.nonlinearities.rectify:
            W_init = lasagne.init.GlorotUniform()
        else:
            # TODO: implement other initializations
            W_init = lasagne.init.GlorotUniform()

        network = lasagne.layers.DenseLayer(
            network, num_units = n_layer,
            nonlinearity = activation_fn,
            W = W_init
        )

    if batch_norm:
>       network = lasagne.layers.batch_norm(network)

E AttributeError: 'module' object has no attribute 'batch_norm'

deepsurv/deep_surv.py:86: AttributeError
____ TestDeepSurvInit.test_deepsurv_initialize_standardize_layer ____

self = <test_deepsurv.TestDeepSurvInit instance at 0x7f9bfa37c9e0>

def test_deepsurv_initialize_standardize_layer(self):
  network = DeepSurv(standardize = True, **self.hyperparams)

tests/test_deepsurv.py:47:


self = <deepsurv.deep_surv.DeepSurv instance at 0x7f9bfa3f7b00>, n_in = 10
learning_rate = 1e-05, hidden_layers_sizes = [10, 10], lr_decay = 0.0
momentum = 0.9, L2_reg = 0.0, L1_reg = 0.0, activation = 'rectify'
dropout = None, batch_norm = False, standardize = True

def __init__(self, n_in,
        learning_rate, hidden_layers_sizes = None,
        lr_decay = 0.0, momentum = 0.9,
        L2_reg = 0.0, L1_reg = 0.0,
        activation = "rectify",
        dropout = None,
        batch_norm = False,
        standardize = False,
        ):
    """
        This class implements and trains a DeepSurv model.

        Parameters:
            n_in: number of input nodes.
            learning_rate: learning rate for training.
            lr_decay: coefficient for Power learning rate decay.
            L2_reg: coefficient for L2 weight decay regularization. Used to help
                prevent the model from overfitting.
            L1_reg: coefficient for L1 weight decay regularization
            momentum: coefficient for momentum. Can be 0 or None to disable.
            hidden_layer_sizes: a list of integers to determine the size of
                each hidden layer.
            activation: a lasagne activation class.
                Default: lasagne.nonlinearities.rectify
            batch_norm: True or False. Include batch normalization layers.
            dropout: if not None or 0, the percentage of dropout to include
                after each hidden layer. Default: None
            standardize: True or False. Include standardization layer after
                input layer.
        """

    self.X = T.fmatrix('x')  # patients covariates
    self.E = T.ivector('e') # the observations vector

    # Default Standardization Values: mean = 0, std = 1
    self.offset = theano.shared(numpy.zeros(shape = n_in, dtype=numpy.float32))
    self.scale = theano.shared(numpy.ones(shape = n_in, dtype=numpy.float32))

    network = lasagne.layers.InputLayer(shape=(None,n_in),
        input_var = self.X)

    if standardize:
>       network = lasagne.layers.standardize(network, self.offset,
                                             self.scale, shared_axes = 0)
E       AttributeError: 'module' object has no attribute 'standardize'

deepsurv/deep_surv.py:60: AttributeError
================= 2 failed, 2 passed, 4 error in 5.42 seconds ==================
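The first four errors all come from the same place: `_get_loss_updates` tests `self.restored_update_params`, but nothing has assigned that attribute on a freshly constructed instance, and Python 2 old-style classes report that exactly as above. A minimal sketch of the failure mode and an obvious guard (simplified names; not necessarily the project's actual fix):

    class DeepSurv:
        def __init__(self):
            # Sketch of a fix: define the attribute up front so later code can
            # test it. (Assumption: in the failing version it is only assigned
            # when a saved model is loaded, so a fresh instance never has it.)
            self.restored_update_params = None

        def _get_loss_updates(self):
            # Equivalent defensive guard if __init__ cannot be changed:
            if getattr(self, 'restored_update_params', None) is not None:
                pass  # reload the optimizer's update parameters here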

jaredleekatzman commented 7 years ago

What version of Lasagne do you have currently installed?
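A quick way to check, assuming Lasagne was installed with pip (illustrative commands, not from the thread):

    $ pip show Lasagne
    $ python -c "import lasagne; print(lasagne.__version__)"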

On Jun 19, 2017, at 2:28 AM, caicai2526 notifications@github.com wrote:

[The quoted email repeats the report above: the same test_deepsurv_initialize_batch_norm and test_deepsurv_initialize_standardize_layer AttributeErrors, raised this time from the installed copy under /usr/local/lib/python2.7/dist-packages/deepsurv/, ending in "2 failed, 6 passed in 23.55 seconds". It also includes this additional session:]

ccf@ccf-Lenovo-Product:~/CCF/DeepSurv-master$ python
Python 2.7.6 (default, Oct 26 2016, 20:30:19) [GCC 4.8.4] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import lasagne
/usr/local/lib/python2.7/dist-packages/theano/tensor/signal/downsample.py:6: UserWarning: downsample module has been moved to the theano.tensor.signal.pool module.
  "downsample module has been moved to the theano.tensor.signal.pool module.")

[4]+  Stopped                 python
ccf@ccf-Lenovo-Product:~/CCF/DeepSurv-master$ theano-cache purge
Traceback (most recent call last):
  File "/usr/local/bin/theano-cache", line 71, in <module>
    theano.gof.compiledir.compiledir_purge()
  File "/usr/local/lib/python2.7/dist-packages/theano/gof/compiledir.py", line 177, in compiledir_purge
    shutil.rmtree(config.compiledir)
  File "/usr/lib/python2.7/shutil.py", line 247, in rmtree
    rmtree(fullname, ignore_errors, onerror)
  File "/usr/lib/python2.7/shutil.py", line 239, in rmtree
    onerror(os.listdir, path, sys.exc_info())
  File "/usr/lib/python2.7/shutil.py", line 237, in rmtree
    names = os.listdir(path)
OSError: [Errno 13] Permission denied: '/home/ccf/.theano/compiledir_Linux-3.13--generic-x86_64-with-Ubuntu-14.04-trusty-x86_64-2.7.6-64/tmpoVtKw2'
ccf@ccf-Lenovo-Product:~/CCF/DeepSurv-master$ sudo theano-cache purge
ccf@ccf-Lenovo-Product:~/CCF/DeepSurv-master$ sudo py.test
============================= test session starts ==============================
platform linux2 -- Python 2.7.6, pytest-3.1.2, py-1.4.34, pluggy-0.4.0
rootdir: /home/ccf/CCF/DeepSurv-master, inifile:
collected 8 items

tests/test_deepsurv.py .F.F....

=================================== FAILURES ===================================
[same TestDeepSurvInit.test_deepsurv_initialize_batch_norm and test_deepsurv_initialize_standardize_layer failures as above]
===================== 2 failed, 6 passed in 22.51 seconds ======================


caicai2526 commented 7 years ago

Thank you @jaredleekatzman. Lasagne 0.1 was installed; after I uninstalled Lasagne 0.1 and installed lasagne==0.2.dev1, those problems were solved. But I have to run "sudo py.test" instead of "py.test"; otherwise I still get "AttributeError: DeepSurv instance has no attribute 'restored_update_params'". Why? Is this a permissions problem?
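For reference, the usual way to get Lasagne 0.2.dev1 is to install the development version from GitHub (the command below follows the Lasagne documentation; adjust as needed):

    $ pip uninstall Lasagne
    $ pip install --upgrade https://github.com/Lasagne/Lasagne/archive/master.zip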

jaredleekatzman commented 7 years ago

@caicai2526 this could be because you have different versions of the Python packages installed for different users on your computer. Try using virtual environments and/or a clean environment.
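A minimal sketch of that advice, assuming virtualenv is available (the package list is illustrative, not the project's exact requirements):

    $ cd ~/CCF/DeepSurv-master
    $ virtualenv venv                # per-user environment, no sudo needed
    $ source venv/bin/activate
    (venv)$ pip install theano https://github.com/Lasagne/Lasagne/archive/master.zip
    (venv)$ pip install -e .         # install DeepSurv from the checkout
    (venv)$ py.test                  # runs against one consistent set of packages

This keeps the user's and root's site-packages out of the picture, so py.test and sudo py.test can no longer resolve to different library versions.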