dummyuser-123 opened 9 months ago
Well, you can move the following lists from facexlib/face_restoration_helper.py
into the enhance function
in gfpgan/utils.py. That should solve the problem, because every request will then have its own lists and they won't get mixed again:
self.all_landmarks_5 = []
self.det_faces = []
self.affine_matrices = []
self.inverse_affine_matrices = []
self.cropped_faces = []
self.restored_faces = []
self.pad_input_imgs = []
But the problem I am getting after doing this is that the quality is low when processing concurrent requests. Any ideas?
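For reference, here is a minimal sketch of one way to keep a single shared GFPGANer and still stop requests from trampling each other's state: serialize calls to enhance with a lock. The model path, constructor arguments and the restore_faces helper name are assumptions based on the public GFPGAN examples, not taken from this issue.

```python
import threading

from gfpgan import GFPGANer

# One shared restorer (expensive to construct), plus a lock so that the
# mutable lists inside its internal FaceRestoreHelper are never touched
# by two requests at the same time.
restorer = GFPGANer(
    model_path='GFPGANv1.3.pth',  # assumed local weight file
    upscale=2,
    arch='clean',
    channel_multiplier=2,
    bg_upsampler=None)
restorer_lock = threading.Lock()


def restore_faces(img_bgr):
    """Run GFPGAN on a BGR image; safe to call from multiple threads."""
    with restorer_lock:
        _, _, restored = restorer.enhance(
            img_bgr, has_aligned=False, only_center_face=False, paste_back=True)
    return restored
```

Serializing the enhance calls removes the race at the cost of throughput; the usual alternative is one GFPGANer per worker process.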
Thanks for the answer. I have already implemented this logic in my code, and I am not facing any quality issue during concurrent requests. If possible, can you tell me briefly which API framework you are using and how you implement concurrent requests in it, so that I can get a better idea of the problem?
I am using waitress, and the issue is that when I send two requests at the same time, the first works just fine but the output of the second image is not good.
Check the difference between the two images: the first one is when it is sent together with another image, and the second one is when I send it alone.
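For context, this is roughly how a Flask app is typically served with waitress; the app, the /enhance route and the restore_faces helper are hypothetical, not from this issue. With threads=4, waitress handles requests concurrently in a thread pool, so two requests can be inside enhance() on the same GFPGANer at once unless access is serialized as sketched above.

```python
import cv2
import numpy as np
from flask import Flask, request
from waitress import serve

app = Flask(__name__)


@app.route('/enhance', methods=['POST'])
def enhance_endpoint():
    # Decode the uploaded image bytes and run the (lock-protected) restorer.
    data = np.frombuffer(request.get_data(), dtype=np.uint8)
    img = cv2.imdecode(data, cv2.IMREAD_COLOR)
    restored = restore_faces(img)  # helper sketched in the earlier comment
    ok, buf = cv2.imencode('.png', restored)
    return buf.tobytes(), 200, {'Content-Type': 'image/png'}


if __name__ == '__main__':
    # threads > 1 means concurrent requests share whatever module-level
    # state the model code keeps, which is where the mixing can start.
    serve(app, host='0.0.0.0', port=8080, threads=4)
```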
First of all, I have never used waitress for API development, but I can tell you some general points that you can check:
Also, have you ever worked with FastAPI for API creation? I have used FastAPI for this model, but I am not able to achieve parallelism for more users. Do you have any idea about this problem?
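A minimal sketch of the kind of FastAPI setup being described (the endpoint and the restore_faces helper are hypothetical). A plain `def` route is run by FastAPI in a thread pool, so requests are handled concurrently, but heavy inference in a single process largely runs one request at a time; for real parallelism you typically run several uvicorn worker processes, each loading its own copy of the model.

```python
import cv2
import numpy as np
from fastapi import FastAPI, File, Response, UploadFile

app = FastAPI()


@app.post('/enhance')
def enhance_endpoint(file: UploadFile = File(...)):
    # A plain `def` route is executed in FastAPI's thread pool, so several
    # requests may reach the model code at the same time.
    data = np.frombuffer(file.file.read(), dtype=np.uint8)
    img = cv2.imdecode(data, cv2.IMREAD_COLOR)
    restored = restore_faces(img)  # hypothetical lock-protected helper
    ok, buf = cv2.imencode('.png', restored)
    return Response(content=buf.tobytes(), media_type='image/png')


# For parallelism across CPU cores or a busy GPU, run several worker
# processes, each with its own model copy, e.g.:
#   uvicorn main:app --workers 2
```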
Nope, I never tried FastAPI, sorry. When I send a request from a single device it works perfectly, no matter how many faces are in the image; the problem only occurs when it is working on multiple images at the same time. If I send two concurrent images with one face each, it works far better. Kindly try two concurrent requests with multiple faces in the images, and make sure they are different images. How did you find out that it is not running in parallel? From the time it took, or something else?
I have created an API for Real-ESRGAN using FastAPI, and it works properly for multiple user requests. However, when I initially load the models (Real-ESRGAN and GFPGAN) using lru_cache (functools) to decrease the inference time, I encounter the following two errors during execution.
1. Sometimes I get the faces from one user request mixed up with those from another user request.
2. In some requests, I get the following error.
This is a small code snippet from my API:
So, when I went through the GFPGAN code, I found that GFPGANer contains an "enhance" function which calls the "facexlib" library for face enhancement and face-related operations. The "enhance" function clears all of the "facexlib" list variables on every execution by reinitializing them. This behavior is only observed when I load the model into the cache; otherwise, it works properly. Is there any way to cache the model and also resolve this error?
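One hedged way to answer that last question, assuming the setup described above: keep the lru_cache so the model is only built once, but pair the cached instance with a lock and hold it for the whole enhance call, combining the caching described here with the locking sketched earlier in the thread. The function names and model path below are illustrative, not taken from the issue's snippet.

```python
import threading
from functools import lru_cache

from gfpgan import GFPGANer


@lru_cache(maxsize=1)
def get_restorer():
    # Built once, then reused by every request (this is what lru_cache gives us).
    restorer = GFPGANer(
        model_path='GFPGANv1.3.pth',  # assumed weight file
        upscale=2,
        arch='clean',
        channel_multiplier=2,
        bg_upsampler=None)
    return restorer, threading.Lock()


def enhance_image(img_bgr):
    restorer, lock = get_restorer()
    # Holding the lock for the whole call keeps one request's faces out of
    # another request's lists inside facexlib's FaceRestoreHelper.
    with lock:
        cropped_faces, restored_faces, restored_img = restorer.enhance(
            img_bgr, has_aligned=False, only_center_face=False, paste_back=True)
    return restored_img
```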