cdpierse / transformers-interpret

Model explainability that works seamlessly with 🤗 transformers. Explain your transformers model in just 2 lines of code.
Apache License 2.0

Gpu usage #60

Closed subhamkhemka closed 3 years ago

subhamkhemka commented 3 years ago

Hi

How do I make sure I am utilising the GPU?

Code:

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from transformers_interpret import ZeroShotClassificationExplainer

tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large-mnli")
model = AutoModelForSequenceClassification.from_pretrained("facebook/bart-large-mnli")
zero_shot_explainer = ZeroShotClassificationExplainer(model, tokenizer)
```

```python
%%time
word_attributions = zero_shot_explainer(
    "Today apple released the new Macbook showing off a range of new features found in the proprietary silicon chip computer. ",
    labels=["finance", "technology", "sports"],
)
```

```
CPU times: user 2min 47s, sys: 3.41 s, total: 2min 51s
Wall time: 1min 26s
```

When I run `nvidia-smi`, it does not show any usage. Is there some other command to check GPU usage?

```
Tue Aug  3 05:59:18 2021
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 450.119.03   Driver Version: 450.119.03   CUDA Version: 11.0     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  Tesla K80           On   | 00000000:00:1E.0 Off |                    0 |
| N/A   36C    P8    29W / 149W |      3MiB / 11441MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+
```

Server has 4 vCPUs and 64 GB RAM along with the above GPU.

```
torch.__version__          -> '1.8.1+cu111'
torch.cuda.is_available()  -> True
```

Thanks, Subham

@subhamkhemka

cdpierse commented 3 years ago

Hi @subhamkhemka, if you want to see whether the explainer is utilizing the GPU, you can check `zero_shot_explainer.device`; it reports whether the device in use is the GPU or the CPU. Thanks.

subhamkhemka commented 3 years ago

@cdpierse

zero_shot_explainer.device
device(type='cpu')

How do I make it use the GPU?

cdpierse commented 3 years ago

Hi @subhamkhemka, to make sure your model is on the GPU (assuming your machine has a compatible GPU available), you can call `model.cuda()` or `model.to("cuda")` to move the model onto the GPU before passing it to the explainer.
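A minimal sketch of that pattern, using a tiny `nn.Linear` as a stand-in for the BART checkpoint (so it runs anywhere) and falling back to CPU when no GPU is visible:

```python
import torch
from torch import nn

# Pick the GPU if PyTorch can see one, otherwise fall back to CPU.
device = "cuda" if torch.cuda.is_available() else "cpu"

# Stand-in for the real model (e.g. AutoModelForSequenceClassification);
# .to(device) moves all of the model's parameters onto the chosen device in place.
model = nn.Linear(4, 2)
model.to(device)

# Every parameter now reports the device the model was moved to.
print(next(model.parameters()).device.type)
```

With the real model, `model.to("cuda")` before constructing `ZeroShotClassificationExplainer(model, tokenizer)` should make `zero_shot_explainer.device` report `cuda`, and `nvidia-smi` will then show memory allocated by the Python process.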

subhamkhemka commented 3 years ago

Working now, thanks @cdpierse