Open Langhalsdino opened 2 years ago
@Langhalsdino Thanks for reporting the issue.
It seems that you have found the solution! Can you help to contribute by opening a pull request to do the fix? Thanks 😄
Thanks for this.
Final part looks like this:
```python
if str(self.device) == 'mps':
    output_img = output_img.flatten().reshape(
        output_img.shape[2], output_img.shape[0], output_img.shape[1]
    ).transpose((1, 2, 0))
    output_img = cv2.cvtColor(output_img, cv2.COLOR_RGB2BGR)
```
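To see why the flatten/reshape/transpose fix works, here is a small NumPy sketch (synthetic data, not the actual Real-ESRGAN tensors) that simulates an HWC image whose underlying buffer arrived in channel-first (CHW) order, then applies the same reinterpretation:

```python
import numpy as np

# Simulate the symptom: a (H, W, C) image whose raw buffer is actually
# laid out channel-first (C, H, W), as appears to happen when a
# non-contiguous tensor comes back from the mps device.
h, w, c = 4, 5, 3
correct = np.arange(h * w * c, dtype=np.float32).reshape(h, w, c)

# The broken array: CHW data mislabeled with an HWC shape. Interpreting
# the 3 channel planes as rows is what produces the "3x3 tiles" look.
broken = correct.transpose(2, 0, 1).flatten().reshape(h, w, c)

# The fix from the comment above: reinterpret the flat buffer as
# (C, H, W), then move the channel axis last.
fixed = broken.flatten().reshape(
    broken.shape[2], broken.shape[0], broken.shape[1]
).transpose((1, 2, 0))

assert np.array_equal(fixed, correct)
```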
Applying the fix to the current master (5ca1078535923d485892caee7d7804380bfc87fd) and executing inference_realesrgan.py (without any parameters) with the device set to `mps` processes the example images really fast. Unfortunately, there are lines in the generated images, and for the provided example images that contain alpha channels the image is still split into 3x3 tiles. Any ideas?
According to this issue (PyTorch MPS Backend), the problem occurs when moving data from the CPU to mps. It can be solved by changing `img.to('mps')` to `img.contiguous().to('mps')`.
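The `.contiguous()` call matters because a transposed tensor is only a view over the original buffer. Here is a NumPy sketch of the same idea, using `np.ascontiguousarray` as a stand-in for torch's `.contiguous()` (the mps backend itself can't be exercised here):

```python
import numpy as np

# A transposed array is a view that shares the original buffer, so it
# is not C-contiguous. Copying such a buffer by raw memory layout
# (which the mps transfer appears to do) scrambles the pixel order;
# making the array contiguous first avoids that.
img = np.arange(2 * 3 * 4, dtype=np.float32).reshape(2, 3, 4)
view = img.transpose(2, 0, 1)           # CHW view, no data copied
print(view.flags['C_CONTIGUOUS'])       # False

contig = np.ascontiguousarray(view)     # analogue of tensor.contiguous()
print(contig.flags['C_CONTIGUOUS'])     # True
assert np.array_equal(contig, view)     # same values, fixed memory layout
```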
What

When using an M1 mac on the current master commit e5763af5749430c9f7389f185cc53f90c4852ed5 with the following environment (environment-mac.yaml), the resulting image is split into 3x3 tiles. This is due to an incorrect unraveling in the `mps` environment on the M1 mac. This only occurs for the `mps` device; if I choose `cpu`, everything works fine. So it is probably due to a different definition of the RGB channel layout within the M1 (`mps`) PyTorch backend.

How to reproduce / environment / ...
Error occurs in:
Everything works fine for:
environment-mac.yaml
Quick hack to solve it
Since only the dimensions are incorrect, loading the image and changing them solves the issue. Sadly, I could not find which part of the `mps` backend screws the dimensions up, so it is probably best to fix them at the end if the device is `mps`. My mind always gets a bit messed up with dimensions and reshaping arrays, so I am using the quickfix. But adding a few lines to the utils #L205 solves the issue permanently.
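A hypothetical sketch of what such a permanent fix could look like. The function and variable names here are illustrative, not the actual realesrgan/utils.py code; the reshape mirrors the quickfix above, applied at the point where the tensor has been converted to a NumPy array:

```python
import numpy as np

# Illustrative helper (not the real utils code): if the device is mps,
# reinterpret the flat buffer as (C, H, W) and move channels last.
def to_hwc(output_img: np.ndarray, device: str) -> np.ndarray:
    if str(device) == 'mps':
        h, w, c = output_img.shape
        output_img = output_img.flatten().reshape(c, h, w).transpose(1, 2, 0)
    return output_img

# Tiny self-check with synthetic data in the broken layout:
h, w, c = 2, 3, 3
correct = np.arange(h * w * c, dtype=np.float32).reshape(h, w, c)
broken = correct.transpose(2, 0, 1).flatten().reshape(h, w, c)
assert np.array_equal(to_hwc(broken, 'mps'), correct)
assert np.array_equal(to_hwc(correct, 'cpu'), correct)  # cpu path untouched
```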