elnath78 opened 1 month ago
OK, I managed to update pip for the correct Python version and install EasyOCR, but now I'm getting this warning and no result:
```
Neither CUDA nor MPS are available - defaulting to CPU. Note: This module is much faster with a GPU.
c:\python312\Lib\site-packages\easyocr\detection.py:78: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
  net.load_state_dict(copyStateDict(torch.load(trained_model, map_location=device)))
c:\python312\Lib\site-packages\easyocr\recognition.py:169: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
  state_dict = torch.load(model_path, map_location=device)
```
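For what it's worth, those two `FutureWarning` messages come from EasyOCR's own internal calls to `torch.load`, not from the user script, and they don't affect the OCR output. A minimal sketch of silencing them before creating the `Reader` (the filter pattern is my own choice, not anything EasyOCR documents):

```python
import warnings

# Suppress the torch.load weights_only FutureWarning raised inside easyocr.
# The message pattern is an assumption matched against the warning text above.
warnings.filterwarnings(
    "ignore",
    category=FutureWarning,
    message=r".*weights_only.*",
)

# Optional sanity check for the "defaulting to CPU" note:
# import torch
# print(torch.cuda.is_available())  # False on a CPU-only PyTorch install
```

The "Neither CUDA nor MPS are available" line is likewise just informational: a CPU-only PyTorch build was installed, so OCR runs slower but still works.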
This is what I'm running:
```python
import easyocr
import os

# Path to the directory containing frames
frames_dir = 'C:/honor/png'
# Path to save the extracted text files
output_dir = 'C:/honor/ocr'

# Create output directory if it doesn't exist
if not os.path.exists(output_dir):
    os.makedirs(output_dir)

# Create an EasyOCR Reader instance
reader = easyocr.Reader(['en'])

# Loop through each frame in the directory
for filename in os.listdir(frames_dir):
    if filename.endswith('.png') or filename.endswith('.jpg'):  # Adjust based on your frame format
        frame_path = os.path.join(frames_dir, filename)
        # Perform OCR
        results = reader.readtext(frame_path)
        # Prepare the output text file path
        output_file_path = os.path.join(output_dir, f'{os.path.splitext(filename)[0]}.txt')
        # Open the file and write the extracted text
        with open(output_file_path, 'w', encoding='utf-8') as text_file:
            for result in results:
                text_file.write(result[1] + '\n')  # Write the text to file with newlines between results
        print(f'Extracted text from {filename} saved to {output_file_path}')
```
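"No result" here usually means `readtext()` returned an empty list, so the output files are created but empty. Each entry EasyOCR returns is a `(bounding_box, text, confidence)` tuple; printing or filtering them per frame makes it obvious whether the recognizer found anything at all. A small sketch (the `summarize` helper and the sample data are hypothetical, shaped like EasyOCR's return value):

```python
def summarize(results, min_confidence=0.3):
    """Return only the recognized strings above a confidence threshold."""
    lines = []
    for bbox, text, confidence in results:
        if confidence >= min_confidence:
            lines.append(text)
    return lines

# Hypothetical sample in the same (bbox, text, confidence) shape:
sample = [
    ([[0, 0], [50, 0], [50, 20], [0, 20]], "Hello", 0.98),
    ([[0, 30], [50, 30], [50, 50], [0, 50]], "???", 0.12),
]
print(summarize(sample))  # low-confidence entries are dropped
```

Dropping that helper into the loop in place of the raw `result[1]` writes would also filter out low-confidence noise from blurry frames.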
I'm not much into Python. I get some weird error; it looks like I have more than one version installed, so I'm not sure how to run the correct one. Here is what I run and the result:
So pip is fine; however, when I run the script I get this error:
It is kind of trolling me. I just ran the command to upgrade pip and everything was fine.