Yale-LILY / AutoACU


Torch CPU Support #2

Open krmchal opened 6 months ago

krmchal commented 6 months ago

In the original code there were a few fragments that prevented a user from running the code on CPU. In the new code I have added the simple Torch pattern described in their migration guide:

import torch
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")  # prefer GPU 0, otherwise fall back to CPU

With this addition in each of the A2CU and A3CU class instantiations, a user no longer has to pass any device information if they do not want to or do not know how. The code auto-detects the device and runs as expected.
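For reference, here is a minimal sketch of where that fallback would sit in an A2CU-style constructor. The constructor signature, attribute names, and the `scorer` usage below are assumptions for illustration only, not the actual AutoACU API:

```python
import torch


class A2CU:
    """Illustrative only: shows the device-selection fallback, not the real class."""

    def __init__(self, device=None):
        # If the caller passes nothing, auto-detect: use the first GPU when
        # one is available, otherwise fall back to CPU.
        if device is None:
            device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
        self.device = device
        # Any model loaded later would then be moved with model.to(self.device).


# Usage: no device argument needed; the class picks CPU or GPU on its own.
scorer = A2CU()
```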

I have also made a few edits to the README to make the install dependencies more explicit, since SentencePiece is required by the T5 tokenizer class you use. I also fixed the sample Python snippets so they run as provided (a few commas were missing).
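For context, this is the kind of failure the missing dependency causes; the model name here is only an example, not the checkpoint AutoACU actually loads:

```python
from transformers import T5Tokenizer

# The slow T5 tokenizer is backed by SentencePiece; if the sentencepiece
# package is not installed, this call raises an ImportError asking the
# user to install it.
tokenizer = T5Tokenizer.from_pretrained("t5-base")
```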

Important: I did not make any changes to the logic of your solution. I only simplified things for users who just want to download and run the code without needing to know the CPU and GPU specifics of Torch.