CUNY-CL / yoyodyne

Small-vocabulary sequence-to-sequence generation with optional feature conditioning
Apache License 2.0

SER on GPU #188

Closed kylebgorman closed 1 month ago

kylebgorman commented 1 month ago

I am getting errors when I try to compute SER on GPU. It looks like the prediction tensors also need to be moved to CPU before they're run through numpy.char.mod on lines 256 and 264, in addition to what's already done on line 245. (I don't know whether they just need to be .cpu()ed or also .numpy()ed.)
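For concreteness, here is a minimal sketch of the kind of fix being suggested; the function and variable names are illustrative rather than the actual ones in the module:

```python
import numpy
import torch


def _format_predictions(predictions: torch.Tensor) -> numpy.ndarray:
    # A GPU tensor cannot be handed to NumPy directly: .cpu() copies it to
    # host memory, and .numpy() exposes that CPU tensor as an ndarray that
    # numpy.char.mod can then format element-wise.
    return numpy.char.mod("%d", predictions.cpu().numpy())
```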

Assigning (very gently) to Adam in case I have misunderstood the problem.

Adamits commented 1 month ago

Oh no! Good catch, I guess I forgot to test all of this on GPU.

I just noticed that there is a torch tensor char type, so I want to see if we can just use that.
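(Presumably this refers to torch.CharTensor, which is PyTorch's 8-bit signed integer tensor type rather than a string type; a minimal check:)

```python
import torch

# torch.CharTensor is the legacy type name for an 8-bit signed integer
# tensor; the corresponding dtype is torch.int8.
t = torch.tensor([1, 2, 3], dtype=torch.int8)
print(t.type())  # torch.CharTensor
```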

Adamits commented 1 month ago

I also noticed that I did this correctly in one case, but not in all...

On a different note, I don't think we actually need to convert them to chars at all; I think we can just loop over the tensors and compare ints. This seems to work. I am going to push and then test on GPU.
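A minimal sketch of what that integer comparison might look like (the names and the padding handling are illustrative assumptions, not the actual implementation):

```python
import torch


def symbol_errors(
    predictions: torch.Tensor, targets: torch.Tensor, pad_idx: int
) -> int:
    # Counts positions where prediction and target indices disagree,
    # ignoring padding; this works whether the tensors live on CPU or GPU.
    mask = targets != pad_idx
    return int(((predictions != targets) & mask).sum())
```

Since the comparison stays in integer space, nothing ever has to be handed to NumPy, so the CPU/GPU question goes away entirely.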

kylebgorman commented 1 month ago

Closed in #189.