chathasphere / pno-ai

Music Transformer Sequence Generation in Pytorch

A few things I had to do to get this working. #34

Open BShennette opened 3 years ago

BShennette commented 3 years ago

Hi. I was curious to try your Music Transformer implementation, but I had to make a few small changes to get it working. Perhaps others will find this information useful, and you might have some insight.

First, in "train.py" at line 147 there is a reference to an undeclared variable mask, which I believe should be x_mask.
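For context, here is a rough, self-contained illustration (not the repo's actual code) of how a per-batch padding mask like x_mask typically enters the loss; the undefined mask name would raise a NameError at exactly this kind of call:

```python
# Illustration only -- stand-in shapes and loss, not train.py's exact code.
import torch
import torch.nn.functional as F

preds = torch.randn(2, 5, 8)                      # (batch, seq, vocab)
targets = torch.randint(0, 8, (2, 5))             # (batch, seq)
x_mask = torch.tensor([[1, 1, 1, 0, 0],
                       [1, 1, 1, 1, 1]], dtype=torch.bool)  # True = real token

# per-token cross entropy, then zero out padded positions via the mask
loss_per_tok = F.cross_entropy(preds.transpose(1, 2), targets, reduction="none")
loss = (loss_per_tok * x_mask).sum() / x_mask.sum()
print(loss.item())
```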

Second, "generate.py" expects a "model.yaml" file that "run.py" never generates. I bypassed this by constructing the MusicTransformer directly with the same arguments used in "run.py" and pointing model_path at one of the checkpoints saved by "run.py" (see the sketch below).
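Roughly, the workaround looked like the sketch below. The import path, hyperparameter dict, and checkpoint filename are placeholders, and I'm assuming the checkpoints are state_dicts saved with torch.save; copy the real constructor arguments from the MusicTransformer(...) call in "run.py":

```python
import torch
from model import MusicTransformer  # assumed import path -- mirror run.py

# Placeholder hyperparameters: copy the exact values run.py passes to
# MusicTransformer so the checkpoint's weights match the architecture.
hparams = dict()
model = MusicTransformer(**hparams)

# Point model_path at one of the checkpoints run.py saved
# (the filename below is made up).
model_path = "saved_models/checkpoint.pt"
model.load_state_dict(torch.load(model_path, map_location="cpu"))
model.eval()
```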

Third, in "helpers.py" I changed the line input_tensor = torch.LongTensor(input_sequence).unsqueeze(0) to call model.cuda() first and build the tensor as input_tensor = torch.cuda.LongTensor(input_sequence).unsqueeze(0). This fixed a CPU/CUDA device mismatch that arose in transformer.forward() (a device-agnostic alternative is sketched below).
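As an alternative to hard-coding torch.cuda.LongTensor (which breaks on CPU-only machines), the tensor could instead be built on whatever device the model's parameters already live on. The helper and toy Embedding module below are stand-ins, not code from the repo:

```python
import torch
import torch.nn as nn

def to_model_device(model: nn.Module, input_sequence):
    """Build a (1, seq_len) LongTensor on the same device as the model."""
    device = next(model.parameters()).device
    return torch.LongTensor(input_sequence).unsqueeze(0).to(device)

# toy usage (nn.Embedding stands in for MusicTransformer, just to show the call)
toy = nn.Embedding(10, 4)
if torch.cuda.is_available():
    toy = toy.cuda()
x = to_model_device(toy, [1, 2, 3])
print(x.device, x.shape)  # cuda:0 or cpu, torch.Size([1, 3])
```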

And lastly, because I was running this locally, I decreased the batch size in run.py so training would fit on my GPU.

Thanks.