huggingface / transformers

🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.
https://huggingface.co/transformers
Apache License 2.0

BertForSequenceClassification does not support 'device_map':"auto" yet #25296

Open goodaytar opened 1 year ago

goodaytar commented 1 year ago

System Info

I have trained a model and am now trying to load and quantise it, but I'm getting the error:

BertForSequenceClassification does not support 'device_map':"auto" yet

Code for loading is simply: model = AutoModelForSequenceClassification.from_pretrained(model_dir, device_map='auto', load_in_8bit=True)

Help would be greatly appreciated!

Thanks,

Lee

Who can help?

No response

Information

Tasks

Reproduction

from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained(model_dir, device_map="auto", load_in_8bit=True)

Expected behavior

The model would load and be usable.

amyeroberts commented 1 year ago

Hi @goodaytar, thanks for raising this issue!

Yes, the BERT models don't support the use of device_map=xxx yet. In the full error message, you should have seen:

BertForSequenceClassification does not support `device_map="auto"`. To implement support, the model class needs to implement the `_no_split_modules` attribute.

To enable this, the `_no_split_modules` attribute needs to be implemented for the model. If you or anyone else in the community would like to open a PR to add this, we'd be very happy to review!
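As a rough sketch of what such a PR adds (using a stand-in class, not the real transformers base class; "BertLayer" is an assumption based on how similar encoder models set this attribute):

```python
# Sketch only: a stand-in for transformers' BertPreTrainedModel in
# models/bert/modeling_bert.py. Class names listed in _no_split_modules
# are modules that accelerate must keep whole on a single device when
# building a device map.

class BertPreTrainedModel:
    # Assumption: each encoder block ("BertLayer") contains residual/skip
    # connections, so it should never be split across devices.
    _no_split_modules = ["BertLayer"]
```

With the attribute in place, `from_pretrained(..., device_map="auto")` no longer raises the "does not support" error for that model class.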

goodaytar commented 1 year ago

Thanks for the reply Amy. If you could give me a little bit more info on what needs adding, I'd be happy to.


amyeroberts commented 1 year ago

In order to know how to properly place the model onto different devices, the models need to have _no_split_modules implemented in their PreTrainedModel class, e.g. like here for RoBERTa.

For some modules, it's necessary to place all of the weights on the same device e.g. like Pix2StructVisionLayer for Pix2Struct.

Adding this is a case of iterating to find which modules should be split and which shouldn't. Once implemented, the accelerate tests should be run and pass. This should be tested with both 1 and 2 GPUs.

goodaytar commented 1 year ago

And how do I find the modules that should be split or not?

amyeroberts commented 1 year ago

@goodaytar You'll need to experiment with the model to find out which modules should be split. I suggest starting with an empty list and looking at similar models to see how they set _no_split_modules.

You can inspect where the layers are allocated by using accelerate's infer_auto_device_map:

from accelerate import infer_auto_device_map

device_map = infer_auto_device_map(model, no_split_module_classes=[])

The modules that can be added are the layers defined in the modeling file, e.g. "BertEmbeddings".

Once set, you can try running the accelerate tests (with GPUs!) to confirm the mapping works. If not, then inspect the device map.
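To make the planner's decision concrete, here is a toy, dependency-free sketch of what infer_auto_device_map does with no_split_module_classes. All names, sizes, and the greedy strategy are illustrative assumptions, not accelerate's real algorithm:

```python
# Toy sketch (pure Python, no accelerate required): place weights on
# devices greedily, but keep any module whose class is in the no-split
# list entirely on one device.

def toy_device_map(modules, capacity, no_split):
    """modules: list of (class_name, [(weight_name, size), ...]).
    capacity: maximum total size per device.
    Returns {weight_name: device_index}."""
    device, used, placement = 0, 0, {}
    for class_name, weights in modules:
        if class_name in no_split:
            # The whole module must land on a single device.
            total = sum(size for _, size in weights)
            if used + total > capacity:
                device, used = device + 1, 0
            for name, _ in weights:
                placement[name] = device
            used += total
        else:
            # Weights may be spread across devices individually.
            for name, size in weights:
                if used + size > capacity:
                    device, used = device + 1, 0
                placement[name] = device
                used += size
    return placement

modules = [
    ("BertEmbeddings", [("embeddings.word", 4)]),
    ("BertLayer", [("layer.0.attn", 2), ("layer.0.ffn", 3)]),
]
# Without protection, layer.0's weights straddle devices 0 and 1:
print(toy_device_map(modules, capacity=6, no_split=[]))
# With "BertLayer" in the no-split list, they move to device 1 together:
print(toy_device_map(modules, capacity=6, no_split=["BertLayer"]))
```

Comparing the two printed maps shows exactly the effect that listing a class in _no_split_modules has on the final placement.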

github-actions[bot] commented 1 year ago

This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.

Please note that issues that do not follow the contributing guidelines are likely to be ignored.

tanaymeh commented 1 year ago

Hi @amyeroberts, I would like to add the 'device_map': "auto" functionality to BERT Models!

amyeroberts commented 1 year ago

@tanaymeh Great! From next week, I'll be off for a few weeks. Please ping @younesbelkada for review in that time.

younesbelkada commented 1 year ago

@tanaymeh that would be really great. In a few words, you just need to make sure to add the module names that contain any skip connection, to avoid potential device mismatch issues. Check for instance what has been done for RoBERTa here: https://github.com/huggingface/transformers/blob/main/src/transformers/models/roberta/modeling_roberta.py#L596
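The failure mode described here can be sketched without torch: if the two branches of a residual (skip) connection land on different devices, the add fails. The Tensor class below is a toy stand-in for a device-tracking tensor, not PyTorch's API:

```python
# Toy illustration of the device-mismatch problem behind _no_split_modules.
# If a block's two halves are placed on different GPUs, the skip
# connection (hidden + transformed) combines tensors from different
# devices and raises -- which is what keeping the block whole prevents.

class Tensor:
    def __init__(self, value, device):
        self.value, self.device = value, device

    def __add__(self, other):
        if self.device != other.device:
            raise RuntimeError(
                f"expected all tensors on the same device, "
                f"got {self.device} and {other.device}"
            )
        return Tensor(self.value + other.value, self.device)

hidden = Tensor(1.0, "cuda:0")       # block input (residual branch)
transformed = Tensor(2.0, "cuda:1")  # sub-layer output on another GPU

try:
    hidden + transformed             # the skip connection
except RuntimeError as err:
    print("device mismatch:", err)
```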

tanaymeh commented 1 year ago

> @tanaymeh that would be really great. In a few words, you just need to make sure to add the module names that contain any skip connection, to avoid potential device mismatch issues. Check for instance what has been done for RoBERTa here: main/src/transformers/models/roberta/modeling_roberta.py#L596

That makes sense @younesbelkada! Will create a PR for this. One question: will the CI tests on GitHub also test my implementation of device_map (with 1 and 2 GPUs) every time I push a commit?

younesbelkada commented 1 year ago

Hi @tanaymeh, thanks, will look into it! The CI will not test it directly; we run the "slow" tests on GPUs every 24h, and those will cover these tests.

dragstoll commented 11 months ago

@younesbelkada Hi Younes, could you make it work for xlm_roberta_xl too? Thanks. Regards, Dragan

Hambaobao commented 10 months ago

@younesbelkada Any updates? We can't wait to use this great feature.

tanaymeh commented 10 months ago

@Hambaobao I am working on the PR for this feature but waiting for a reply from @younesbelkada!

bp020108 commented 7 months ago

Any update on this issue? Or has anyone fixed it?

Vsareen0 commented 6 months ago

Any update on this issue ?

lucasjinreal commented 6 months ago

ValueError: SiglipVisionModel does not support device_map='auto'

Same for Siglip?

amyeroberts commented 5 months ago

@lucasjinreal There are many models which don't yet have this enabled. I've opened a feature request to add this for vision and multimodal models which could have this added: #29786