MertKalkanci / Highlights-Maker

A video highlights creator
GNU General Public License v3.0

AttributeError: '_io.BufferedRandom' object has no attribute 'endswith' #3

Closed KalvinThien closed 5 months ago

KalvinThien commented 5 months ago

I ran main.py and got the interface running on the local URL http://127.0.0.1:7860.

But when I select a video and fill in the keywords, I get the following error:



```
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
Video path: <tempfile._TemporaryFileWrapper object at 0x0000021659C2BE90>
Keywords: ['size ', ' over']
Length: 30
Skip Rate: 30
Temperature: 0.35
Language: EN
Traceback (most recent call last):
  File "C:\Users\KalvinThien\AppData\Local\Programs\Python\Python311\Lib\site-packages\gradio\routes.py", line 394, in run_predict
    output = await app.get_blocks().process_api(
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\KalvinThien\AppData\Local\Programs\Python\Python311\Lib\site-packages\gradio\blocks.py", line 1075, in process_api
    result = await self.call_function(
             ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\KalvinThien\AppData\Local\Programs\Python\Python311\Lib\site-packages\gradio\blocks.py", line 884, in call_function
    prediction = await anyio.to_thread.run_sync(
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\KalvinThien\AppData\Local\Programs\Python\Python311\Lib\site-packages\anyio\to_thread.py", line 56, in run_sync
    return await get_async_backend().run_sync_in_worker_thread(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\KalvinThien\AppData\Local\Programs\Python\Python311\Lib\site-packages\anyio\_backends\_asyncio.py", line 2144, in run_sync_in_worker_thread
    return await future
           ^^^^^^^^^^^^
  File "C:\Users\KalvinThien\AppData\Local\Programs\Python\Python311\Lib\site-packages\anyio\_backends\_asyncio.py", line 851, in run
    result = context.run(func, *args)
             ^^^^^^^^^^^^^^^^^^^^^^^^
  File "d:\Ai_AUto_crop\Highlights-Maker-main\Highlights-Maker-main\highlight.py", line 91, in highlight
    video = VideoFileClip(videopath)
            ^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\KalvinThien\AppData\Local\Programs\Python\Python311\Lib\site-packages\moviepy\video\io\VideoFileClip.py", line 88, in __init__
    self.reader = FFMPEG_VideoReader(filename, pix_fmt=pix_fmt,
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\KalvinThien\AppData\Local\Programs\Python\Python311\Lib\site-packages\moviepy\video\io\ffmpeg_reader.py", line 35, in __init__
    infos = ffmpeg_parse_infos(filename, print_infos, check_duration,
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\KalvinThien\AppData\Local\Programs\Python\Python311\Lib\site-packages\moviepy\video\io\ffmpeg_reader.py", line 244, in ffmpeg_parse_infos
    is_GIF = filename.endswith('.gif')
             ^^^^^^^^^^^^^^^^^
  File "C:\Users\KalvinThien\AppData\Local\Programs\Python\Python311\Lib\tempfile.py", line 478, in __getattr__
    a = getattr(file, name)
        ^^^^^^^^^^^^^^^^^^^
AttributeError: '_io.BufferedRandom' object has no attribute 'endswith'
```
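The traceback shows `VideoFileClip` receiving gradio's tempfile wrapper object instead of a path string. A minimal, hypothetical workaround is to resolve the path before handing it to moviepy (the helper name is an assumption, not code from this repo; the wrapper exposes the real path as `.name`):

```python
def resolve_video_path(file_input):
    """Return a filesystem path whether we are handed a plain string
    or a tempfile wrapper object (gradio's File component may pass
    either; the wrapper exposes the underlying path as .name)."""
    if isinstance(file_input, str):
        return file_input
    name = getattr(file_input, "name", None)
    if isinstance(name, str):
        return name
    raise TypeError(f"cannot resolve a video path from {type(file_input)!r}")
```

With this in place, `highlight.py` could call `VideoFileClip(resolve_video_path(videopath))` and accept either input shape.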
MertKalkanci commented 5 months ago

Hey there, I think the problem happens because you are using the 1.0.0 release. I've fixed some problems in newer commits but hadn't made a release.

I've now made a new release, 1.1.0, and updated the readme.md.

I hope you won't hit the same errors in the new release. I am not closing this issue yet, in case you face another problem. If you don't face any problems in the new release, please close the issue. Have a great day.

KalvinThien commented 5 months ago

```
C:\Users\KalvinThien\AppData\Local\Programs\Python\Python311\Lib\site-packages\requests\__init__.py:102: RequestsDependencyWarning: urllib3 (1.26.18) or chardet (5.2.0)/charset_normalizer (2.0.12) doesn't match a supported version!
  warnings.warn("urllib3 ({}) or chardet ({})/charset_normalizer ({}) doesn't match a supported "
Traceback (most recent call last):
  File "d:\Ai_AUto_crop\Highlights-Maker-main\Highlights-Maker-main\main.py", line 12, in <module>
    file_path = gr.FileExplorer(label="Video",file_count="single", glob="*/*.mp4")
                ^^^^^^^^^^^^^^^
AttributeError: module 'gradio' has no attribute 'FileExplorer'
```

I'm always running into trouble, haha. I asked ChatGPT to fix the error for me.

However, I'd like to ask one more thing: besides OpenAI, can Gemini be used instead? The GPT-3.5 model isn't really as good as Gemini-Pro-001.
MertKalkanci commented 5 months ago

Actually, I'm not sure how to fix Gradio errors; try updating the packages in the virtual environment. It is possible to add Gemini support by changing ai.py and main.py to add the selection to the UI.

Could you try updating the packages? Also, how did you fix it in the other issue you opened? Maybe the same thing could solve this problem again.
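As a rough illustration of that idea, a Gemini branch for `llm_manager` could look like the sketch below. The `google.generativeai` calls are assumptions based on that library's published interface and are not part of this repo; only the dispatch logic is shown as working code.

```python
# Hypothetical sketch of a Gemini branch for ai.py's llm_manager.
class llm_manager:
    def __init__(self, llm="OPENAI", api_key=""):
        self.type = llm
        if llm == "GEMINI":
            import google.generativeai as genai  # assumed package name
            genai.configure(api_key=api_key)
            self.model = genai.GenerativeModel("gemini-pro")

    def generate(self, system, chat, temperature):
        if self.type == "GEMINI":
            # This sketch folds the system prompt into the user message,
            # since the prompt is sent to Gemini as a single string here.
            response = self.model.generate_content(
                f"{system}\n\n{chat}",
                generation_config={"temperature": temperature},
            )
            return response.text
        raise NotImplementedError(f"backend not sketched: {self.type}")
```

The UI side would then just add `"GEMINI"` to the existing `ai` dropdown in main.py.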

KalvinThien commented 5 months ago

main.py:

```
import sys  # needed for the --share check at the bottom

import gradio as gr
from highlight import highlight
from youtube import download

languages = ["EN","VI"]
download_status = 0
generation_status = 0

with gr.Blocks() as interface:
    with gr.Tab(label="Highlight"):
        file_path = gr.File(label="Video", file_types=[".mp4"], file_count="single")
        temperature = gr.Slider(minimum=0, maximum=2,value=0.35, step=0.05, label="AI Temperature")
        length = gr.Slider(minimum=18, maximum=60, step=3, label="Dialogue Length", value=30, info="How many sentences AI will process at once (higher = faster generation & less chance of getting rate limited by openai)")
        language = gr.Dropdown(languages, label="Language")
        keywords = gr.Textbox(label="Keywords", placeholder="viral, funny, highlights", info="write different keywords comma separated")
        ai = gr.Dropdown(["OPENAI","LOCAL GGUF"], label="AI to interpret the video script")
        ai_path = gr.File(label="Local AI Model", file_types=[".gguf"], file_count="single")
        generate_button = gr.Button(value="Generate")
        is_generated = gr.Textbox(label="Is Generated?", interactive=False)
    with gr.Tab(label="Download"):
        link = gr.Textbox(label="Link")
        download_button = gr.Button(value="Download")
        out_video = gr.Video(label="Video")

    download_button.click(fn=download, inputs=link, outputs=out_video)
    generate_button.click(fn=highlight, inputs=[file_path, temperature, length, language, keywords, ai, ai_path], outputs=is_generated)

is_shared = len(sys.argv) > 1 and sys.argv[1] == "--share"

interface.launch(share=is_shared)
```

But I've run into a new problem: the GUI says it has completed, but there are no files in the output directory.
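One common cause of "complete, but no output file" is writing into a directory that does not exist, so a write silently fails or lands elsewhere. A small defensive helper (hypothetical; the directory and file names are assumptions, not code from this repo) can rule that out:

```python
import os

def ensure_output_path(directory="output", filename="highlight.mp4"):
    """Create the output directory if it is missing and return the
    full target path, so the video writer never targets a
    nonexistent folder."""
    os.makedirs(directory, exist_ok=True)
    return os.path.join(directory, filename)
```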

This is ai.py:

```
import whisper
import openai
from openai import OpenAI
from llama_cpp import Llama

# Load the Whisper model
model_audio = whisper.load_model("base")

# Define the llm_manager class
class llm_manager():
    def __init__(self, llm="OPENAI", path=""):
        if llm == "OPENAI":
            # Set the OpenAI API key (use your own key; redacted here)
            api_key = "sk-..."
            self.client = OpenAI(api_key=api_key)
            self.type = "OPENAI"
        elif llm == "LOCAL GGUF":
            # Load the local GGUF model
            self.llm = Llama(path)
            self.type = "LOCAL GGUF"

    def generate(self, system, chat, temperature):
        if self.type == "OPENAI":
            return self.gpt(system, chat, temperature)
        elif self.type == "LOCAL GGUF":
            return self.llama(system, chat)

    def gpt(self, system, chat, temperature):
        # Create the GPT chat completions using the OpenAI API
        response = self.client.chat.completions.create(
            model="gpt-3.5-turbo-1106",
            temperature=temperature,
            messages=[
                {"role": "system", "content": system},
                {"role": "user", "content": chat},
            ]
        )
        return response.choices[0].message.content

    def llama(self, system, chat):
        # Generate a response using the local GGUF llm
        return self.llm(f"System: \n{system}\nUser: \n{chat}")["choices"][0]["text"]

# Define the transcribe function
def trasncribe(video_path):
    return model_audio.transcribe(video_path)
```
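For reference, whisper's `transcribe` returns a dict whose `segments` list carries the timestamped text. A small sketch (the helper name and numbering format are assumptions) of turning that result into the kind of numbered dialogue a prompt might consume:

```python
def segments_to_dialogue(result):
    """Join a whisper transcription result's segments into numbered
    lines; each entry in result["segments"] carries a "text" field."""
    return "\n".join(
        f"{i}: {seg['text'].strip()}"
        for i, seg in enumerate(result["segments"])
    )
```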

![image](https://github.com/MertKalkanci/Highlights-Maker/assets/120565705/887e9f9f-0468-43b1-bba9-13d2b1eb6308)
MertKalkanci commented 5 months ago

In another repository's issue I found this:

> This is usually caused by gradio not being installed correctly in the python environment.
>
> Can you try `pip uninstall gradio` and then rerun `pip install gradio` and let us know if you are still seeing this?

Also, which Python version are you using?

KalvinThien commented 5 months ago

I have fixed it. Now I'm editing it further: since the main trend is video Shorts (1080 px x 1920 px, also known as 9:16), I want to add an option that, when ticked, crops the video around the part that needs to be highlighted.
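The 9:16 crop itself is mostly arithmetic; a minimal sketch (names are assumptions, and it centers the crop rather than tracking a face) of computing the crop rectangle for a landscape frame:

```python
def crop_box_9_16(width, height):
    """Compute a centered 9:16 crop rectangle for a frame of the
    given size. Returns (x1, y1, x2, y2) pixel coordinates."""
    target_w = int(height * 9 / 16)
    if target_w > width:
        # Frame is already narrower than 9:16; keep full width.
        target_w = width
    x1 = (width - target_w) // 2
    return (x1, 0, x1 + target_w, height)
```

For a 1920x1080 frame this yields a 607-pixel-wide centered strip, which could then feed moviepy's cropping, with the `x1` offset replaced by a face-tracking position as in the autocropper linked below.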

MertKalkanci commented 5 months ago

Yes, you can add it; create a pull request and I will accept it.

KalvinThien commented 5 months ago

Sorry if I was unclear; I meant it only as an idea. I have some code here that crops the video, detecting the face to decide where to crop.

Could you take a look at it and integrate it into the app? https://github.com/NisaarAgharia/AI-Shorts-Creator/blob/main/autocropper.py

MertKalkanci commented 5 months ago

OK, I will have a look at it.