OpenBioML / protein-lm-scaling


Integrate ReRope position encoding #34

Closed pascalnotin closed 1 year ago

pascalnotin commented 1 year ago

Integrate ReRope position encoding and friends.

Reference implementations:

Implementation should be consistent with that of the other position encoding schemes (e.g., rotary, ALiBi).

Issue resolution should also include unit tests to validate that the position encodings behave as expected.
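For context on why consistency across schemes matters: the two encodings mentioned work quite differently. Rotary encodings rotate query/key feature pairs by a position-dependent angle before the dot product, while ALiBi leaves the vectors untouched and adds a static linear bias to the attention logits. A minimal NumPy sketch (function names are illustrative, not from this repo):

```python
import numpy as np

def rotary_encode(x: np.ndarray, pos: int, base: float = 10000.0) -> np.ndarray:
    """Rotate consecutive feature pairs of x by pos * theta_i (RoFormer-style)."""
    d = x.shape[-1]
    assert d % 2 == 0, "feature dimension must be even"
    theta = base ** (-2.0 * np.arange(d // 2) / d)   # per-pair rotation frequency
    c, s = np.cos(pos * theta), np.sin(pos * theta)
    x1, x2 = x[0::2], x[1::2]                        # even/odd feature pairs
    out = np.empty_like(x)
    out[0::2] = x1 * c - x2 * s
    out[1::2] = x1 * s + x2 * c
    return out

def alibi_bias(seq_len: int, slope: float) -> np.ndarray:
    """ALiBi: a static bias of -slope * |i - j| added to the attention logits."""
    pos = np.arange(seq_len)
    return -slope * np.abs(pos[:, None] - pos[None, :])
```

Because rotary acts on the q/k vectors and ALiBi acts on the score matrix, a shared interface in the codebase would need to expose both hooks.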

talkhanz commented 1 year ago

Hey @pascalnotin, I was wondering if we should add unit tests for these encoding schemes to check that each encoding works as intended (e.g., take a random vector and verify that operations on it produce the expected output). We'll probably add more encoding schemes in the future, so it would be nice if contributors could verify their contributions through some sort of unit tests.
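One concrete version of the random-vector idea, sketched with a minimal NumPy rotary implementation (the function and test names here are hypothetical, not from this repo): rotary encoding preserves vector norms, and the encoded q·k should depend only on the position offset, so both properties can be checked directly on random inputs.

```python
import numpy as np

def apply_rotary(x: np.ndarray, pos: int, base: float = 10000.0) -> np.ndarray:
    """Minimal rotary encoding on a 1-D vector of even dimension, for testing."""
    d = x.shape[-1]
    theta = base ** (-2.0 * np.arange(d // 2) / d)
    c, s = np.cos(pos * theta), np.sin(pos * theta)
    x1, x2 = x[0::2], x[1::2]
    out = np.empty_like(x)
    out[0::2] = x1 * c - x2 * s
    out[1::2] = x1 * s + x2 * c
    return out

def test_zero_position_is_identity():
    x = np.arange(8.0)
    assert np.allclose(apply_rotary(x, 0), x)

def test_norm_preserved():
    rng = np.random.default_rng(0)
    x = rng.normal(size=16)
    assert np.allclose(np.linalg.norm(apply_rotary(x, 9)), np.linalg.norm(x))

def test_score_depends_only_on_offset():
    # q . k after encoding should be invariant to shifting both positions
    rng = np.random.default_rng(1)
    q, k = rng.normal(size=16), rng.normal(size=16)
    a = np.dot(apply_rotary(q, 2), apply_rotary(k, 6))
    b = np.dot(apply_rotary(q, 52), apply_rotary(k, 56))
    assert np.allclose(a, b)
```

Tests of this shape would run under pytest as-is, and each new encoding scheme could register against the same property checks.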

talkhanz commented 1 year ago

It seems @NZ99 requested @csjackson0 in another thread here, so it's only fair that I leave it up to him. @csjackson0, do let us know if you'd like to continue working on the FIM issue. I'll be working on this issue in the meantime while I hear back from @csjackson0. @pascalnotin, there are a couple of RoPE variants here too that I can follow up on if needed.

talkhanz commented 1 year ago

/take

pascalnotin commented 1 year ago

@talkhanz - sounds good re: your suggestions for unit tests and including the other RoPE variants from the Llama codebase. Will edit the issue description. Thanks!

talkhanz commented 1 year ago

> @talkhanz - sounds good re: your suggestions for unit tests and including the other rope variants as per the llama codebase as you suggested. Will edit the issue description. Thanks!

Thanks for accommodating my feedback. Time to get to work!

csjackson0 commented 1 year ago

@talkhanz thanks for checking in! I am working on FIM, but I am also happy to help with this issue.

talkhanz commented 1 year ago

@csjackson0 is working on the variants I suggested from the Llama codebase, and I'm working on ReRope. Currently, our strategy for testing the encodings is to follow some of the ideas from the Hugging Face tests here.

To avoid merge conflicts, we are thinking of working in separate files for now, one for each variant. We can consolidate everything into one file once all variants are integrated. How does that sound, @pascalnotin?

pascalnotin commented 1 year ago

Sounds good - thank you both!

talkhanz commented 1 year ago

@csjackson0 FYI I've made a PR incorporating ReRope. Hopefully it helps you avoid merge conflicts with your other Llama rotary variants!

pascalnotin commented 1 year ago

Closing this issue given the merge of PR #35.