lucidrains / perceiver-ar-pytorch

Implementation of Perceiver AR, Deepmind's new long-context attention network based on Perceiver architecture, in Pytorch

About music generation with perceiver-ar model #3

Open feizc opened 2 years ago

feizc commented 2 years ago

Hi, @lucidrains

Thanks for the implementation of the Perceiver AR model. We ran experiments on pop music generation at: https://github.com/feizc/Perceiver-Music-Generation. The results are encouraging, grateful to you : )

lucidrains commented 2 years ago

🎶🤖😄

lucidrains commented 2 years ago

@feizc how are you approaching the problem of generating starting from a length that is less than the prefix?

feizc commented 2 years ago

> @feizc how are you approaching the problem of generating starting from a length that is less than the prefix?

Actually, I use a fixed-length conditioning context, i.e., a prefix of prior music, to continue writing the next melody.

In my opinion, to start from zero, we can use a special token like [pad] to fill out the prefix length, or use only the decoder to generate an initial sequence and then generate conditioned on the latents.
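For illustration, here is a rough sketch of that padding idea with this repo's classes. The pad id, the hyperparameters, and the `generate` signature are my assumptions for the sketch, not something I checked against the wrapper:

```python
import torch
from perceiver_ar_pytorch import PerceiverAR
# import path assumed from the autoregressive_wrapper module in this repo
from perceiver_ar_pytorch.autoregressive_wrapper import AutoregressiveWrapper

PAD_ID = 0          # hypothetical pad token id
PREFIX_LEN = 3072   # should match cross_attn_seq_len below

model = AutoregressiveWrapper(PerceiverAR(
    num_tokens = 20000,
    dim = 512,
    depth = 8,
    heads = 8,
    max_seq_len = 4096,
    cross_attn_seq_len = PREFIX_LEN
))

# a short prime (e.g. a few bars of music), much shorter than the required prefix
prime = torch.randint(1, 20000, (1, 128))

# left-pad the prime with the special token so it reaches the prefix length
padding = torch.full((1, PREFIX_LEN - prime.shape[-1]), PAD_ID, dtype = torch.long)
start_tokens = torch.cat((padding, prime), dim = -1)

# assuming the wrapper exposes a generate(start_tokens, seq_len) method,
# as in lucidrains' other autoregressive wrappers
generated = model.generate(start_tokens, 1024)
```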

I read the source code and found that the authors begin from zeros :)


```python
def gen_initial_events():
    # all-zero event buffer of shape [devices, batch, max events length]
    events = np.zeros([device_count, batch_size, max_events_length], np.int32)
    # the first position of every sequence is the start-of-sequence token
    events[:, :, 0] = dataset.SOS_ID
    return events
```

usryokousha commented 2 years ago

After reviewing the current implementation (autoregressive_wrapper), it seems you generate each subsequent token one at a time, as would be the case in most architectures. The authors of the Perceiver AR paper outlined a strided approach (with a stride typically the size of the self-attention sequence length) where the sampled tokens are cached up to a certain size and then the buffer is freed. Have you considered implementing this? The officially released perceiver-ar implementation is relatively easy to follow.
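To make the idea concrete, here is a rough sketch of just the buffering control flow. It omits the activation caching within a stride, which is where the real speedup comes from, and it assumes `model` maps token ids to per-position logits:

```python
import torch

@torch.no_grad()
def strided_generate(model, prime, total_len, stride):
    # prime: [batch, prefix_len] token ids; stride would typically be the
    # latent self-attention length. Sampled tokens accumulate in a buffer;
    # once the buffer reaches `stride`, it is folded into the prefix and freed.
    prefix = prime
    buffer = prime.new_zeros((prime.shape[0], 0))

    while prefix.shape[1] + buffer.shape[1] < total_len:
        context = torch.cat((prefix, buffer), dim = 1)
        logits = model(context)                    # [batch, seq, vocab] assumed
        probs = logits[:, -1].softmax(dim = -1)
        next_token = torch.multinomial(probs, 1)   # [batch, 1]
        buffer = torch.cat((buffer, next_token), dim = 1)

        if buffer.shape[1] == stride:              # buffer full -> extend prefix, free buffer
            prefix = torch.cat((prefix, buffer), dim = 1)
            buffer = prefix.new_zeros((prefix.shape[0], 0))

    return torch.cat((prefix, buffer), dim = 1)
```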

lucidrains commented 2 years ago

> After reviewing the current implementation (autoregressive_wrapper), it seems you generate each subsequent token one at a time, as would be the case in most architectures. The authors of the Perceiver AR paper outlined a strided approach (with a stride typically the size of the self-attention sequence length) where the sampled tokens are cached up to a certain size and then the buffer is freed. Have you considered implementing this? The officially released perceiver-ar implementation is relatively easy to follow.

noo not yet, i haven't implemented their special caching strategy at inference

but if i keep hearing more positive results, i may implement it! have to admit i was doubtful about the architecture initially

usryokousha commented 2 years ago

I’m curious to see how well this would work at inference, particularly when using a VQ-VAE / VQGAN to encode images. If you could decode in only a few steps, that would really speed up generation. I suspect quality would suffer, but the paper’s ImageNet results seem promising.