Closed: dellis23 closed this 2 years ago
View the notebook here: https://github.com/dellis23/iree-torch/blob/5e4c29fcfb404b1de31e553572290d36223983ce/examples/bert.ipynb
@silvasean Do you have a quick explanation I can include for why we need to create the wrapped model? I'll update and remove the TODO if so.
I'd also like to add an example of creating a super-small binary if possible (perhaps for embedded devices?). I've looked around but haven't seen any clear examples. Can you point me toward any?
PyTorch models typically produce multiple outputs: some are used only during training, and some matter only for statistics tracking. For this model, only the next-token "logits" (the unnormalized scores from which the next-token probabilities are computed) are relevant.
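A minimal sketch of that kind of wrapper (the class name `OnlyLogitsWrapper` and the `.logits` attribute access are illustrative of the pattern, not taken from the notebook):

```python
import torch
from types import SimpleNamespace


class OnlyLogitsWrapper(torch.nn.Module):
    """Wrap a model whose forward() returns a structured result and
    expose only the logits tensor, discarding training-only outputs."""

    def __init__(self, model):
        super().__init__()
        self.model = model

    def forward(self, input_ids):
        # Keep only the logits; drop loss, hidden states, etc.
        return self.model(input_ids).logits


class _DummyModel(torch.nn.Module):
    """Stand-in for a real model: returns logits plus extra outputs."""

    def forward(self, input_ids):
        return SimpleNamespace(logits=input_ids * 2.0, loss=None)


wrapped = OnlyLogitsWrapper(_DummyModel())
out = wrapped(torch.ones(3))  # a plain tensor, suitable for tracing
```

Because the wrapper returns a single tensor rather than a structured object, it is straightforward to trace and compile.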
FYI, we usually squash all commits (and tidy up the resulting commit message) before merging, to keep local development steps from showing up in the mainline history.
Ha, yeah, I just messaged you about that. It looks like squash-and-merge is disabled on this repository. I know I can do it manually, but I think using the GitHub button would retain the history in the PR and correctly squash in main.
The GitHub button naively concatenates the commit messages, though. We don't want intermediate messages like `use_traced -> use_tracing` in the mainline history, since they don't actually describe the final patch itself.
Ah, I see. I thought it let you edit the message. Thanks, I'll do it via the command line.
Update README to point to new notebook. Minor README formatting changes.