Closed — bfirsh closed this pull request 3 years ago.
Hi @bfirsh! The changes look good so I'm merging your changes. I'm interested in seeing how these changes will affect the response time on Replicate. I'll also try to make similar changes to my ReStyle demo :)
Hey @bfirsh,

Although I merged your update, I am playing around with further optimizing the `predict` function. Currently, we are still performing the following for every `predict` call:
https://github.com/yuval-alaluf/SAM/blob/8d1c4b3c76ec0faf60b7c23c8cf1734dea2e1a45/predict.py#L45-L48
Although this takes only a few seconds, since it's always the same `net` each time, I thought about moving this to the `setup` function so we'd have something like:
```python
model_path = "pretrained_models/sam_ffhq_aging.pt"
ckpt = torch.load(model_path, map_location="cpu")
opts = ckpt["opts"]
opts["checkpoint_path"] = model_path
opts["device"] = "cuda" if torch.cuda.is_available() else "cpu"
self.shape_predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")
self.opts = Namespace(**opts)
self.net = pSp(self.opts)
self.net.eval()
if torch.cuda.is_available():
    self.net.cuda()
```
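For context, the change above follows the standard Cog pattern: expensive loading happens once in `setup()`, and `predict()` only runs inference against the cached model. Here is a minimal, dependency-free sketch of that pattern; `FakeNet` and `load_net` are hypothetical stand-ins for the real `pSp` network and the `torch.load` logic, not the actual SAM code:

```python
# Sketch of the load-once / predict-many pattern discussed above.
# LOAD_CALLS lets us verify the expensive load happens exactly once.
LOAD_CALLS = 0

class FakeNet:
    """Hypothetical stand-in for the pSp network."""
    def __call__(self, x):
        return x * 2

def load_net():
    # In the real predictor this is torch.load(...) + pSp(opts) + .eval()
    global LOAD_CALLS
    LOAD_CALLS += 1
    return FakeNet()

class Predictor:
    def setup(self):
        # Runs once when the container starts: cache the model here.
        self.net = load_net()

    def predict(self, x):
        # Runs per request: inference only, no loading.
        return self.net(x)

predictor = Predictor()
predictor.setup()
outputs = [predictor.predict(i) for i in range(3)]
print(outputs)      # -> [0, 2, 4]: each call reuses the cached net
print(LOAD_CALLS)   # -> 1: the model was loaded exactly once
```

The design point is simply that per-call work in `predict` is multiplied by traffic, while work in `setup` is paid once per container start.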
But if I do this, I get very weird behavior on Replicate: it seems to be stuck in a loop, loading the model over and over. Do you know why something like this could happen?

Also, when I run `cog predict` locally, this behavior doesn't occur, so I'm wondering if this is something happening "behind the scenes" that I simply can't see 🤔
Hello again @yuval-alaluf :)
This pull request does a few little things:

- Moves model loading into `setup()` to make running predictions much faster/more efficient (we noticed this during the high traffic yesterday!)
- Fixes a bug (`shape` being undefined, and it looked broken)

If you run `cog push` after merging and pulling, it'll update the version on Replicate. 😄