Hi there! This happens because the keys of the features_per_layer dictionary, which stores the activations for each module in the extract_all_layers convenience function (see below), are custom names rather than the original module names. To use the original module names, replace features_per_layer[f'layer_{l:02d}'] with features_per_layer[f'{module_name}']. See my comments in the function below.
from typing import Any, Dict, List, Optional, Union

import numpy as np
import torch
import torch.nn as nn


def extract_all_layers(
    model_name: str,
    extractor: Any,
    image_path: str,
    out_path: str,
    batch_size: int,
    flatten_activations: bool,
    apply_center_crop: bool,
    layer: Any = nn.Linear,
    file_format: str = "npy",
    class_names: Optional[List[str]] = None,
    file_names: Optional[List[str]] = None,
) -> Dict[str, Union[np.ndarray, torch.Tensor]]:
    """Extract features for all selected layers and save them to disk."""
    features_per_layer = {}
    for l, (module_name, module) in enumerate(extractor.model.named_modules(), start=1):
        # only hook modules of the requested type (e.g., nn.Linear or nn.Conv2d)
        if isinstance(module, layer):
            # extract features for layer <module_name>
            features = extract_features(
                extractor=extractor,
                module_name=module_name,
                image_path=image_path,
                out_path=out_path,
                batch_size=batch_size,
                flatten_activations=flatten_activations,
                apply_center_crop=apply_center_crop,
                class_names=class_names,
                file_names=file_names,
            )
            # NOTE: to keep the original module names, use
            # features_per_layer[f'{module_name}'] = features instead;
            # alternatively, use custom names such as f'conv_{l:02d}' or f'fc_{l:02d}'
            features_per_layer[f'layer_{l:02d}'] = features
            # save the features to disk
            save_features(
                features,
                out_path=f'{out_path}/features_{model_name}_{module_name}',
                file_format=file_format,
            )
    return features_per_layer
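For a quick sanity check of which keys to expect, you can print the model's own module names before extracting (a minimal sketch, assuming extractor.model is a standard torch.nn.Module):

for name, module in extractor.model.named_modules():
    if isinstance(module, nn.Conv2d):
        print(name)  # e.g., 'features.0' is the first conv layer in torchvision's AlexNet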
Hi Lukas! Thanks for the quick response!
I guess I am missing something, but after replacing the line as suggested, I get only one key, named 'module_name', like this:
> {'module_name': array([[-5.751324 , -7.1328826, -7.124266 , ..., -5.815997 , -5.2186813,
> -1.916337 ],
> [-3.6160486, -5.257819 , -4.3458114, ..., -6.049942 , -5.9555883,
> -3.9248927],
> [-1.8869545, -8.714518 , -6.6266246, ..., 0.0618932, -6.012134 ,
> -7.594375 ],
> ...,
> [-1.5973396, -2.0364807, -2.9034288, ..., -8.830564 , -8.896644 ,
> -8.055617 ],
> [-2.4018128, -5.4838195, -5.3646264, ..., -7.173752 , -6.7403235,
> -4.4299088],
> [-6.647603 , -5.406933 , -3.6814663, ..., -3.7315552, -4.8392887,
> -4.5182486]], dtype=float32)}
I am sorry. It should read features_per_layer[f'{module_name}']
rather than features_per_layer[f'module_name']
. This was just a typo in my previous comment (I've corrected it).
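For context, braces are what trigger interpolation in an f-string: f'module_name' is just the literal string 'module_name' (so every layer overwrites the same key), whereas f'{module_name}' substitutes the variable's value. A minimal illustration:

module_name = 'features.0'
print(f'module_name')    # prints: module_name (a literal, identical for every layer)
print(f'{module_name}')  # prints: features.0 (the actual module name)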
Great, thank you, that works!
Hi!
I am using the Colab notebook for PyTorch. I tried AlexNet by changing some parameters in the notebook example (VGG-16 with batch norm, pretrained on ImageNet). I wanted to extract activations from all the convolutional layers, so I used the following code modification:
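(The original snippet did not survive in this thread; judging from the description, it was presumably a call along these lines, with layer=nn.Conv2d so that AlexNet's five convolutional layers are hooked. All argument values here are placeholders:)

features_per_layer = extract_all_layers(
    model_name='alexnet',
    extractor=extractor,  # AlexNet extractor set up as in the notebook
    image_path=image_path,
    out_path=out_path,
    batch_size=batch_size,
    flatten_activations=flatten_activations,
    apply_center_crop=apply_center_crop,
    layer=nn.Conv2d,  # hook convolutional instead of linear layers
)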
The number of extracted layers (5) is correct; however, their names are wrong. Also, the activations do not seem to match those I get when extracting the layers one by one.