johnmarktaylor91 / torchlens

Package for extracting and mapping the results of every single tensor operation in a PyTorch model in one line of code.
GNU General Public License v3.0

AttributeError: 'Tensor' object has no attribute 'tl_tensor_label_raw' #13

Closed bhairavmehta95 closed 1 year ago

bhairavmehta95 commented 1 year ago

I'm trying to debug a text-to-speech model (relevant because these models are typically lots of different sub-models stitched together, meaning there's no clean, single forward()).

And I'm getting this error:

Feature extraction failed; returning model and environment to normal
*************
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-43-7a5e906e87cc> in <module>
      9 t_en = model.text_encoder(tokens, input_lengths, m)
     10 
---> 11 tl.log_forward_pass(model.text_encoder, (tokens, input_lengths, m))
     12 
     13 with torch.no_grad():

/home/bhairavm/coding/deeplearning-env/lib/python3.6/site-packages/torchlens/user_funcs.py in log_forward_pass(model, input_args, input_kwargs, layers_to_save, keep_unsaved_layers, output_device, activation_postfunc, mark_input_output_distances, detach_saved_tensors, save_function_args, save_gradients, vis_opt, vis_nesting_depth, vis_outpath, vis_save_only, vis_fileformat, vis_buffer_layers, vis_direction, random_seed)
    108             save_function_args=save_function_args,
    109             save_gradients=save_gradients,
--> 110             random_seed=random_seed,
    111         )
    112     else:

/home/bhairavm/coding/deeplearning-env/lib/python3.6/site-packages/torchlens/model_history.py in run_model_and_save_specified_activations(model, input_args, input_kwargs, layers_to_save, keep_unsaved_layers, output_device, activation_postfunc, mark_input_output_distances, detach_saved_tensors, save_function_args, save_gradients, random_seed)
   6736     )
   6737     model_history.run_and_log_inputs_through_model(
-> 6738         model, input_args, input_kwargs, layers_to_save, random_seed
   6739     )
   6740     return model_history

/home/bhairavm/coding/deeplearning-env/lib/python3.6/site-packages/torchlens/model_history.py in run_and_log_inputs_through_model(self, model, input_args, input_kwargs, layers_to_save, random_seed)
   1761                 "************\nFeature extraction failed; returning model and environment to normal\n*************"
   1762             )
-> 1763             raise e
   1764 
   1765         finally:  # do garbage collection no matter what

/home/bhairavm/coding/deeplearning-env/lib/python3.6/site-packages/torchlens/model_history.py in run_and_log_inputs_through_model(self, model, input_args, input_kwargs, layers_to_save, random_seed)
   1728             self.prepare_model(model, module_orig_forward_funcs, decorated_func_mapper)
   1729             self.elapsed_time_setup = time.time() - self.pass_start_time
-> 1730             outputs = model(*input_args, **input_kwargs)
   1731             self.elapsed_time_forward_pass = (
   1732                     time.time() - self.pass_start_time - self.elapsed_time_setup

/home/bhairavm/coding/deeplearning-env/lib/python3.6/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
   1100         if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
   1101                 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1102             return forward_call(*input, **kwargs)
   1103         # Do not call functions when jit is used
   1104         full_backward_hooks, non_full_backward_hooks = [], []

/home/bhairavm/coding/StyleTTS/models.py in forward(self, x, input_lengths, m)
    311 
    312         self.lstm.flatten_parameters()
--> 313         x, _ = self.lstm(x)
    314         x, _ = nn.utils.rnn.pad_packed_sequence(
    315             x, batch_first=True)

/home/bhairavm/coding/deeplearning-env/lib/python3.6/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
   1100         if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
   1101                 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1102             return forward_call(*input, **kwargs)
   1103         # Do not call functions when jit is used
   1104         full_backward_hooks, non_full_backward_hooks = [], []

/home/bhairavm/coding/deeplearning-env/lib/python3.6/site-packages/torchlens/model_history.py in decorated_forward(*args, **kwargs)
   1298             input_tensor_labels = set()
   1299             for t in input_tensors:
-> 1300                 tensor_entry = self.raw_tensor_dict[t.tl_tensor_label_raw]
   1301                 input_tensor_labels.add(t.tl_tensor_label_raw)
   1302                 module.tl_tensors_entered_labels.append(t.tl_tensor_label_raw)

AttributeError: 'Tensor' object has no attribute 'tl_tensor_label_raw'

Any idea on where the issue might lie? I'm not sure if:

  1. There's some issue with how I'm loading the model from checkpoints
  2. The model needs to be in training mode
  3. Something else
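For reference, the failing line looks up `raw_tensor_dict[t.tl_tensor_label_raw]`, so any tensor that reaches a module without having been labeled by TorchLens would trigger exactly this error. Here's a minimal pure-Python sketch of that mechanism (no torch needed; everything except `tl_tensor_label_raw` and `raw_tensor_dict`, which appear in the traceback, is a made-up stand-in, not TorchLens's actual implementation):

```python
class FakeTensor:
    """Stand-in for torch.Tensor (attributes can be set dynamically)."""
    pass

# Stand-in for the label-to-entry log the traceback references.
raw_tensor_dict = {}

def decorated_op(result: FakeTensor) -> FakeTensor:
    # Hypothetical: every op result gets a label and a log entry.
    label = f"tensor_{len(raw_tensor_dict)}"
    result.tl_tensor_label_raw = label
    raw_tensor_dict[label] = result
    return result

def module_pre_hook(t: FakeTensor) -> FakeTensor:
    # Mirrors the failing line: raw_tensor_dict[t.tl_tensor_label_raw]
    return raw_tensor_dict[t.tl_tensor_label_raw]

tagged = decorated_op(FakeTensor())
assert module_pre_hook(tagged) is tagged  # labeled tensor: lookup succeeds

# A tensor created outside the decorated ops (e.g. rebuilt internally by
# an nn.LSTM call on a packed sequence) carries no label:
untagged = FakeTensor()
try:
    module_pre_hook(untagged)
except AttributeError as e:
    print(e)  # 'FakeTensor' object has no attribute 'tl_tensor_label_raw'
```

If that's the right mental model, the error would point at some code path in the forward pass producing tensors that bypass the labeling step, rather than at checkpoint loading or train/eval mode.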
johnmarktaylor91 commented 1 year ago

Is it an open-source model, and if so, any chance you can tell me which one it is? It looks like an internal bug, but I think it should be quick to fix if I can play around with it.

johnmarktaylor91 commented 1 year ago

Never mind, the model name was in the file paths in the output you gave ;) I was able to debug it; here's a picture of the TextEncoder module. I'll push a new change as soon as I've re-run all the tests.

styletts_text_encoder.pdf

Also, regarding your other points: it shouldn't be an issue with how the model is loaded from checkpoints; that would affect the weights, but not any of the programming logic of TorchLens. And TorchLens is meant to work in both training and evaluation mode. So generally speaking, if everything is functioning correctly, neither of those should affect how TorchLens works.

bhairavmehta95 commented 1 year ago

TYSM!

johnmarktaylor91 commented 1 year ago

Okay, just pushed the updated version to GitHub and pip; it should work now. Let me know if it doesn't.

johnmarktaylor91 commented 1 year ago

> TYSM!

Did that fix it for you?

johnmarktaylor91 commented 1 year ago

It now works for me after the update, so I'm closing this issue.