yuval-alaluf / SAM

Official Implementation for "Only a Matter of Style: Age Transformation Using a Style-Based Regression Model" (SIGGRAPH 2021) https://arxiv.org/abs/2102.02754
https://yuval-alaluf.github.io/SAM/
MIT License

Better errors & faster Replicate predictions #36

Closed · bfirsh closed this 3 years ago

bfirsh commented 3 years ago

Hello again @yuval-alaluf :)

This pull request does a few little things:

If you run `cog push` after merging and pulling, it'll update the version on Replicate. 😄

yuval-alaluf commented 3 years ago

Hi @bfirsh! The changes look good so I'm merging your changes. I'm interested in seeing how these changes will affect the response time on Replicate. I'll also try to make similar changes to my ReStyle demo :)

yuval-alaluf commented 3 years ago

Hey @bfirsh, although I merged your update, I am playing around with further optimizing the predict function. Currently, we are still performing the following for every predict call: https://github.com/yuval-alaluf/SAM/blob/8d1c4b3c76ec0faf60b7c23c8cf1734dea2e1a45/predict.py#L45-L48

Although this takes only a few seconds, it loads the same net on every call, so I thought about moving it to the setup function so we'd have something like:

        # Load the checkpoint once, when the worker starts
        model_path = "pretrained_models/sam_ffhq_aging.pt"
        ckpt = torch.load(model_path, map_location="cpu")

        opts = ckpt["opts"]
        opts["checkpoint_path"] = model_path
        opts["device"] = "cuda" if torch.cuda.is_available() else "cpu"

        # dlib landmark detector used for face alignment
        self.shape_predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

        # Build the pSp network and keep it on the instance for reuse
        self.opts = Namespace(**opts)
        self.net = pSp(self.opts)
        self.net.eval()
        if torch.cuda.is_available():
            self.net.cuda()
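The idea above — pay the loading cost once in `setup`, reuse the network in every `predict` — can be sketched independently of Cog. This is a minimal, hypothetical illustration (`FakeModel` stands in for the pSp network), not the actual SAM code:

```python
import time


class FakeModel:
    """Hypothetical stand-in for the pSp network; constructing it is slow."""

    def __init__(self):
        time.sleep(0.1)  # simulate the few seconds torch.load takes

    def run(self, x):
        return x * 2


class Predictor:
    def setup(self):
        # The loading cost is paid exactly once, when the worker starts
        self.net = FakeModel()

    def predict(self, x):
        # Every call reuses the already-loaded network
        return self.net.run(x)


p = Predictor()
p.setup()
print(p.predict(21))  # no reload on this call, or any later one
```

The win is proportional to how often `predict` is called on the same worker: with loading in `predict`, every request pays the load; with loading in `setup`, only the first does.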

But if I do this, I get a very weird behavior on Replicate:

*(screenshot of the Replicate logs, 2021-09-11)*

It seems like it is stuck in a loop loading the model.
Do you know why something like this could happen?

yuval-alaluf commented 3 years ago

Also, when I run `cog predict` locally, this behavior doesn't occur. So I'm wondering if this is something that is happening "behind the scenes" that I simply can't see 🤔
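One way to check whether the worker process is being restarted between calls — which would explain `setup` (and the model load) appearing to run in a loop — is to log the process ID inside `setup`. This is just a debugging sketch; `log_setup` is an illustrative helper, not part of the project:

```python
import os
import time


def log_setup():
    """Call this at the top of setup().

    If the logged PID changes between predictions, the worker was
    restarted (e.g. killed by an out-of-memory or health-check
    watchdog), so setup runs again and the model is reloaded.
    """
    pid = os.getpid()
    print(f"setup: pid={pid} time={time.time():.0f}")
    return pid


log_setup()
```

A stable PID across requests with repeated load messages would instead point at `setup` being called more than once in the same process.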