Currently this configuration uses 17 GB of VRAM, which can probably be optimized much further. When I tried to implement the ROC curve and enabled logits concatenation, it consumed all 24 GB of my VRAM, which should not be normal.
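One common cause of this kind of memory growth is appending each batch's logits to a list while they are still GPU tensors: every stored tensor keeps device memory (and, if gradients are enabled, its autograd graph) alive until the final concatenation. A minimal sketch of the usual fix, detaching and moving each batch to the CPU before storing it (the `collect_logits` helper and the stand-in model here are hypothetical, not part of this project's code):

```python
import torch

def collect_logits(model, loader, device="cpu"):
    """Run inference and collect logits without keeping them on the GPU.

    Storing raw GPU tensors in a Python list keeps every batch's device
    memory alive until the end of the loop; detaching each batch and
    moving it to the CPU lets the GPU memory be freed per batch.
    """
    chunks = []
    with torch.no_grad():  # no autograd graph is built during inference
        for images in loader:
            logits = model(images.to(device))
            chunks.append(logits.detach().cpu())  # release GPU memory per batch
    return torch.cat(chunks)  # concatenate once, on the CPU

# Tiny usage example with a stand-in "model" (a linear layer) and loader.
model = torch.nn.Linear(4, 3)
loader = [torch.randn(10, 4) for _ in range(3)]  # three batches of size 10
all_logits = collect_logits(model, loader)       # shape: (30, 3)
```

If the ROC computation itself runs on the concatenated tensor, doing it on the CPU copy should keep peak VRAM close to the single-batch footprint.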
Configuration used:

```yaml
model:
  name: "ResNet18" # The name you want to give to the model
  hugginface_model: "google/vit-base-patch16-224"
  batch_size: 10 # Batch size for training. If you have memory problems, use a lower batch size
  local_model_path: ""
  use_preprocessor: True
  local_preprocessor: ""
  enable_resize: True
  resize_size: 224

dataset:
  train_on_dataset: True # If True, use the true labels from the dataset.
                         # If False, run the model on the images and generate pseudo labels for training.
  dataset_path: "mrm8488/ImageNet1K-val"
  sample_number: 30 # Number of samples to use from the dataset for the evaluation
  random_seed: 1 # To run the test with the same data each time, set a random seed not equal to 0
  image_feature_title: "image" # Check the dataset's specification for the name of the feature that contains the image
  label_feature_title: "label"

embedding_models:
  clip_model_enable: true

attack:
  targeted: False
  target_list: [(),()]
  one_pixel:
    enable_attack: True
    steps: 1
    pixels: 1
    population_size: 10
```
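A nested configuration like this is typically parsed into plain dictionaries with PyYAML; a minimal sketch (the inline `config_text` stands in for reading the actual config file, whose path is not given here):

```python
import yaml  # PyYAML; an assumption - this project may use a different loader

# Stand-in for the real config file; only a subset of the keys above.
config_text = """
model:
  name: "ResNet18"
  batch_size: 10
  enable_resize: True
  resize_size: 224
dataset:
  sample_number: 30
  random_seed: 1
"""

config = yaml.safe_load(config_text)  # safe_load avoids executing arbitrary tags
batch_size = config["model"]["batch_size"]      # -> 10
sample_number = config["dataset"]["sample_number"]  # -> 30
```

Note that YAML parses both `True` and `true` as booleans, so the mixed capitalization in the config above is harmless, and unquoted values like `10` arrive as Python ints.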