dmMaze / BallonsTranslator

A deep-learning-aided comic/manga translation tool supporting one-click machine translation and simple image/text editing | Yet another computer-aided comic/manga translation tool powered by deep learning
GNU General Public License v3.0

DeepL Translation #12

Closed — AstrohGalaxy closed this issue 2 years ago

AstrohGalaxy commented 2 years ago

It won't let me translate to English using DeepL; it reports: ('target_lang="EN" is deprecated, please use "EN-GB" or "EN-US" instead.'). For DeepL you need to specify either British English or American English for it to work.

dmMaze commented 2 years ago

This should be fixed by #5 & #6, but the fix hasn't made it into a release yet.
You can wait for the next release, or run from source:

# First, you need to have Python(>=3.8) installed on your system.
$ python --version

# Clone this repo
$ git clone https://github.com/dmMaze/BallonsTranslator.git
$ cd BallonsTranslator

# Install the dependencies
$ pip install -r requirements.txt

Download the data folder from https://drive.google.com/drive/folders/1uElIYRLNakJj-YS0Kd3r3HE-wzeEvrWd?usp=sharing and move it into BallonsTranslator/ballontranslator, then run:

python ballontranslator/__main__.py
Snowad14 commented 2 years ago

@dmMaze There is another problem with DeepL: the Japanese -> English translation does not work because of this check (see attached screenshot).

dmMaze commented 2 years ago

@dmMaze There is another problem with DeepL: the Japanese -> English translation does not work because of this check (see attached screenshot).

It seems the translate_text method of deepl accepts List[str], so there is no need to manually split and concatenate. Set concate_text = False as below; if that works (passes the assertion above), please make a pull request.

@register_translator('Deepl')
class DeeplTranslator(TranslatorBase):

    concate_text = False
    setup_params: Dict = {
        'api_key': ''
    }
...
Snowad14 commented 2 years ago

I had already tried that, but I've just figured out how to do it.

ROKOLYT commented 2 years ago

For some reason the issue still exists on my end. DeepL works fine with all languages except English. I confirmed that I have the newest version with EN-US implemented. Here is the DeepL implementation in dl\translators\__init__.py:

@register_translator('Deepl')
class DeeplTranslator(TranslatorBase):

    concate_text = False
    setup_params: Dict = {
        'api_key': ''
    }

    def _setup_translator(self):
        self.lang_map['English'] = 'EN-US'

    def _translate(self, text: Union[str, List]) -> Union[str, List]:
        api_key = self.setup_params['api_key']
        translator = deepl.Translator(api_key)
        source = self.lang_map[self.lang_source]
        target = self.lang_map[self.lang_target]
        if source == 'EN-US':
            source = "EN"
        result = translator.translate_text(text, source_lang=source, target_lang=target)
        return [i.text for i in result]

I still get the error that started this thread:

It won't allow me to translate to English using DeepL it says ('target_lang="EN" is deprecated, please use "EN-GB" or "EN-US" instead.'). For DeepL you need to use either English British or English American for it to work.
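The deprecation error comes from DeepL treating source and target codes asymmetrically: a bare "EN" is still accepted as a source language, but targets must be regional variants such as "EN-US" or "EN-GB". A hypothetical normalization helper (the function name and the choice of "EN-US" as the default variant are this sketch's assumptions, not part of the project):

```python
# Hypothetical helper illustrating DeepL's asymmetric language codes:
# bare codes are fine as *source* languages, but deprecated as *targets*.

DEPRECATED_TARGETS = {
    "EN": "EN-US",   # "EN-GB" would be equally valid; US is assumed here
    "PT": "PT-BR",
}

def normalize_deepl_langs(source: str, target: str) -> tuple:
    """Return (source, target) codes safe to pass to translate_text."""
    source = source.upper()
    target = target.upper()
    # Source languages use the bare code: strip a regional suffix if present.
    if "-" in source:
        source = source.split("-")[0]
    # Target languages must not use a deprecated bare code.
    target = DEPRECATED_TARGETS.get(target, target)
    return source, target
```

With this in place, a user-facing "English" can map to "EN-US" for targets while the same selection is sent as "EN" when used as a source, which is exactly the special case the `if source == 'EN-US'` branch above handles.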

Snowad14 commented 2 years ago

With Japanese -> English ?

ROKOLYT commented 2 years ago

Yes. Also, I'd like to mention that I couldn't find a way to install PyQt5<=5.15.4, so I installed a newer version.

Snowad14 commented 2 years ago

Can you send me copies of the images? Everything works fine for me.

ROKOLYT commented 2 years ago

Everything JP -> ENG fails (sample image attached).

Snowad14 commented 2 years ago

The attached image (179868062-10ee6a31-9c64-4ab2-934a-c24054019336) works perfectly for me.

ROKOLYT commented 2 years ago

Have you modified any files? I downloaded the latest release from google drive (ver 1.2.0)

Snowad14 commented 2 years ago

Have you modified any files? I downloaded the latest release from google drive (ver 1.2.0)

No, it's on your side. Try re-downloading the repo and creating a virtual Python environment with python -m venv .venv, then activate it (cd .venv/Scripts, then activate.bat) and reinstall the requirements.
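The steps described above, cleaned up as a Windows (cmd) command sequence — a sketch assuming the default venv layout, not an official setup script:

```shell
:: Create a clean virtual environment in the repo root
python -m venv .venv

:: Activate it (Windows cmd; use Activate.ps1 under PowerShell)
.venv\Scripts\activate.bat

:: Reinstall the dependencies inside the clean environment
pip install -r requirements.txt
```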

Snowad14 commented 2 years ago

Have you modified any files? I downloaded the latest release from google drive (ver 1.2.0)

From what you say, you are using the Drive version, which is not up to date, so the error is expected; you need to clone the repo and run __main__.py.

ROKOLYT commented 2 years ago

I tried it and got the following error:

[INFO   ] import_utils:<module>:50 - PyTorch version 1.12.0+cu116 available.
[INFO   ] import_utils:<module>:50 - PyTorch version 1.12.0+cu116 available.
PyTorch version 1.12.0+cu116 available.
Traceback (most recent call last):
  File "C:\Users\jassz\BallonsTranslator\ballontranslator\__main__.py", line 44, in <module>
    main()
  File "C:\Users\jassz\BallonsTranslator\ballontranslator\__main__.py", line 38, in main
    ballontrans = MainWindow(app, open_dir=args.proj_dir)
  File "C:\Users\jassz\BallonsTranslator\ballontranslator\ui\mainwindow.py", line 45, in __init__
    self.setupUi()
  File "C:\Users\jassz\BallonsTranslator\ballontranslator\ui\mainwindow.py", line 62, in setupUi
    self.leftBar = LeftBar(self)
  File "C:\Users\jassz\BallonsTranslator\ballontranslator\ui\mainwindowbars.py", line 197, in __init__
    vlayout.setContentsMargins(padding, 0, padding, btn_width/2)
TypeError: arguments did not match any overloaded call:
  setContentsMargins(self, int, int, int, int): argument 4 has unexpected type 'float'
  setContentsMargins(self, QMargins): argument 1 has unexpected type 'int'
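The TypeError above happens because true division always yields a float in Python 3, and newer PyQt5 bindings enforce int arguments for setContentsMargins. A sketch of the problem and the likely one-line fix (the concrete values of btn_width and padding here are made up for illustration):

```python
# Qt layout margins must be ints; "/" produces a float in Python 3.
btn_width = 45   # hypothetical value, for illustration only
padding = 7      # hypothetical value, for illustration only

bad_margin = btn_width / 2    # 22.5 -> rejected by newer PyQt5 bindings
good_margin = btn_width // 2  # 22   -> accepted (floor division stays int)

# The hypothetical fix in mainwindowbars.py line 197 would be:
# vlayout.setContentsMargins(padding, 0, padding, btn_width // 2)
margins = (padding, 0, padding, good_margin)
assert all(isinstance(m, int) for m in margins)
```

Older PyQt5 releases silently truncated the float, which is why this only surfaces on newer versions.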
Snowad14 commented 2 years ago

Try it on Python 3.8.

ROKOLYT commented 2 years ago

So I did everything you told me to on Python 3.8.10 and copied the models and libs from Google Drive (as they aren't included in the repo). In the end, I got an error I had not seen before. (It crashed.)

[INFO   ] dl_manager:on_finish_settranslator:645 - Translator set to Deepl
Traceback (most recent call last):
  File "C:\Users\jassz\ballonstranslator\ballontranslator\ui\dl_manager.py", line 362, in run
    self.job()
  File "C:\Users\jassz\ballonstranslator\ballontranslator\ui\dl_manager.py", line 305, in _imgtrans_pipeline
    mask, blk_list = self.textdetector.detect(img)
  File "C:\Users\jassz\ballonstranslator\ballontranslator\dl\textdetector\__init__.py", line 84, in detect
    _, mask, blk_list = self.detector(img)
  File "C:\Users\jassz\AppData\Local\Programs\Python\Python38\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "C:\Users\jassz\ballonstranslator\ballontranslator\dl\textdetector\ctd\inference.py", line 178, in __call__
    mask = cv2.resize(mask, (im_w, im_h), interpolation=cv2.INTER_LINEAR)
cv2.error: OpenCV(4.6.0) D:\a\opencv-python\opencv-python\opencv\modules\imgproc\src\resize.cpp:3689: error: (-215:Assertion failed) !dsize.empty() in function 'cv::hal::resize'
dmMaze commented 2 years ago

So I did everything you told me to on python 3.8.10 and copied models and libs from google drive (as they aren't included in the repo). In the end, I got an error I had not seen before. (It crashed) [...]

cv2.error: OpenCV(4.6.0) D:\a\opencv-python\opencv-python\opencv\modules\imgproc\src\resize.cpp:3689: error: (-215:Assertion failed) !dsize.empty() in function 'cv::hal::resize'

Delete the generated .json file and run it again.
If that doesn't work, please upload a copy of the image that caused the crash.

ROKOLYT commented 2 years ago

Deleting the .json file didn't work. If it matters, I'd like to mention that for some reason everything is set to CPU and I cannot change it to CUDA (screenshot attached).

ROKOLYT commented 2 years ago

OK, so torch was installed without CUDA for some reason. After reinstalling it like this:

pip uninstall torch
pip cache purge
pip install torch -f https://download.pytorch.org/whl/torch_stable.html

I can select CUDA but it still crashes. The traceback while using CPU is the same; while using CUDA it is:

Traceback (most recent call last):
  File "C:\Users\jassz\BallonsTranslator\ballontranslator\ui\dl_manager.py", line 362, in run
    self.job()
  File "C:\Users\jassz\BallonsTranslator\ballontranslator\ui\dl_manager.py", line 305, in _imgtrans_pipeline
    mask, blk_list = self.textdetector.detect(img)
  File "C:\Users\jassz\BallonsTranslator\ballontranslator\dl\textdetector\__init__.py", line 84, in detect
    _, mask, blk_list = self.detector(img)
  File "C:\Users\jassz\AppData\Local\Programs\Python\Python38\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "C:\Users\jassz\BallonsTranslator\ballontranslator\dl\textdetector\ctd\inference.py", line 169, in __call__
    blks = postprocess_yolo(blks, self.conf_thresh, self.nms_thresh, resize_ratio)
  File "C:\Users\jassz\BallonsTranslator\ballontranslator\dl\textdetector\ctd\inference.py", line 106, in postprocess_yolo
    det = non_max_suppression(det, conf_thresh, nms_thresh)[0]
  File "C:\Users\jassz\BallonsTranslator\ballontranslator\dl\textdetector\yolov5\yolov5_utils.py", line 202, in non_max_suppression
    i = torchvision.ops.nms(boxes, scores, iou_thres)  # NMS
  File "C:\Users\jassz\AppData\Local\Programs\Python\Python38\lib\site-packages\torchvision\ops\boxes.py", line 41, in nms
    return torch.ops.torchvision.nms(boxes, scores, iou_threshold)
  File "C:\Users\jassz\AppData\Local\Programs\Python\Python38\lib\site-packages\torch\_ops.py", line 143, in __call__
    return self._op(*args, **kwargs or {})
NotImplementedError: Could not run 'torchvision::nms' with arguments from the 'CUDA' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'torchvision::nms' is only available for these backends: [Dense, Negative, UNKNOWN_TENSOR_TYPE_ID, QuantizedXPU, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, SparseCPU, SparseCUDA, SparseHIP, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, SparseVE, UNKNOWN_TENSOR_TYPE_ID, NestedTensorCUDA, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID].

CPU: registered at C:\Users\circleci\project\torchvision\csrc\ops\cpu\nms_kernel.cpp:112 [kernel]
QuantizedCPU: registered at C:\Users\circleci\project\torchvision\csrc\ops\quantized\cpu\qnms_kernel.cpp:125 [kernel]
BackendSelect: fallthrough registered at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\core\BackendSelectFallbackKernel.cpp:3 [backend fallback]
Python: registered at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\core\PythonFallbackKernel.cpp:133 [backend fallback]
Named: registered at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\core\NamedRegistrations.cpp:7 [backend fallback]
Conjugate: registered at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\ConjugateFallback.cpp:18 [backend fallback]
Negative: registered at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\native\NegateFallback.cpp:18 [backend fallback]
ZeroTensor: registered at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\ZeroTensorFallback.cpp:86 [backend fallback]
ADInplaceOrView: fallthrough registered at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\core\VariableFallbackKernel.cpp:64 [backend fallback]
AutogradOther: fallthrough registered at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\core\VariableFallbackKernel.cpp:35 [backend fallback]
AutogradCPU: fallthrough registered at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\core\VariableFallbackKernel.cpp:39 [backend fallback]
AutogradCUDA: fallthrough registered at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\core\VariableFallbackKernel.cpp:47 [backend fallback]
AutogradXLA: fallthrough registered at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\core\VariableFallbackKernel.cpp:51 [backend fallback]
AutogradMPS: fallthrough registered at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\core\VariableFallbackKernel.cpp:59 [backend fallback]
AutogradXPU: fallthrough registered at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\core\VariableFallbackKernel.cpp:43 [backend fallback]
AutogradHPU: fallthrough registered at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\core\VariableFallbackKernel.cpp:68 [backend fallback]
AutogradLazy: fallthrough registered at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\core\VariableFallbackKernel.cpp:55 [backend fallback]
Tracer: registered at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\torch\csrc\autograd\TraceTypeManual.cpp:295 [backend fallback]
AutocastCPU: fallthrough registered at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\autocast_mode.cpp:481 [backend fallback]
Autocast: fallthrough registered at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\autocast_mode.cpp:324 [backend fallback]
Batched: registered at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\BatchingRegistrations.cpp:1064 [backend fallback]
VmapMode: fallthrough registered at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\VmapModeRegistrations.cpp:33 [backend fallback]
Functionalize: registered at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\FunctionalizeFallbackKernel.cpp:89 [backend fallback]
PythonTLSSnapshot: registered at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\core\PythonFallbackKernel.cpp:137 [backend fallback]
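The underlying cause — a CPU-only build pulled in by a plain `pip install torch` — can be spotted from the version string alone, since wheels from the CUDA index carry a "+cuXXX" local tag (e.g. "1.12.0+cu116") while CPU-only Windows wheels are tagged "+cpu" or carry no tag. A hypothetical diagnostic sketch (the function name is this sketch's own):

```python
# Sketch: detect a CPU-only PyTorch build from its version string,
# e.g. cuda_build_tag(torch.__version__).

def cuda_build_tag(version: str):
    """Return the CUDA tag (e.g. 'cu116'), or None for a CPU-only build."""
    if "+" not in version:
        return None  # no local tag: not a CUDA wheel
    tag = version.split("+", 1)[1]
    return tag if tag.startswith("cu") else None
```

If this returns None, the installed torch cannot use the GPU no matter what the UI is set to.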
dmMaze commented 2 years ago

cv2.error: OpenCV(4.6.0) D:\a\opencv-python\opencv-python\opencv\modules\imgproc\src\resize.cpp:3689: error: (-215:Assertion failed) !dsize.empty() in function 'cv::hal::resize'

Try installing opencv-python==4.5.*
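The OpenCV assertion "(-215:Assertion failed) !dsize.empty()" fires when the requested output size has a zero dimension, which usually means the input image failed to load or decode. A hypothetical guard that would surface the real problem before the cv2.resize call (the function name is invented for this sketch):

```python
# Hypothetical guard for the cv2.resize crash above: validate the target
# size before calling cv2.resize(mask, (im_w, im_h), ...), so a failed
# image load produces a clear error instead of an OpenCV assertion.

def safe_resize_shape(im_w: int, im_h: int):
    """Return (im_w, im_h) if it is a valid resize target, else raise."""
    if im_w <= 0 or im_h <= 0:
        raise ValueError(
            f"invalid resize target ({im_w}, {im_h}); "
            "was the source image decoded correctly?"
        )
    return (im_w, im_h)
```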

dmMaze commented 2 years ago

Ok, so torch was installed without cuda for some reason. [...] I can select cuda but it still crashes. [...]

NotImplementedError: Could not run 'torchvision::nms' with arguments from the 'CUDA' backend.

Try the newest PyTorch: pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu116
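The NotImplementedError above typically means torchvision was built without CUDA ops while torch was not; installing both packages together from the same CUDA index keeps their builds matched. A sketch of the full reinstall, assuming the cu116 index used in this thread:

```shell
# Remove the mismatched builds first
pip uninstall -y torch torchvision torchaudio

# Reinstall all three from the same CUDA index so the compiled
# torchvision ops match the torch build (cu116 assumed here)
pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu116
```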

ROKOLYT commented 2 years ago

So I've installed opencv-python==4.5.* and reinstalled PyTorch with pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu116, and it works perfectly. For anyone having similar issues, here is everything I did:

dmMaze commented 2 years ago

Resolved in v1.3.0.