TNTwise / Rife-Vulkan-Models


I want to use these models on M1 Mac #1

Closed bukhalmae145 closed 8 months ago

bukhalmae145 commented 8 months ago

I copied these models into the rife-ncnn-vulkan macOS build's directory and ran it, but I get a MemoryData layer error for RIFE models newer than 4.6 (4.6 worked fine).

TNTwise commented 8 months ago

Sadly, you need the MemoryData layer to run models newer than 4.6. Even 4.6 technically requires it, but nihui hacked it out somehow. I'm looking into converting the models without MemoryData, but the output ends up skewed. Does the x86 compilation of the macOS build here https://github.com/TNTwise/rife-ncnn-vulkan/releases/tag/20240108 not work through Rosetta? I'll try to compile for arm, but I'm told I need GitHub Pro.

bukhalmae145 commented 8 months ago

Sadly, you need the MemoryData layer to run models newer than 4.6. Even 4.6 technically requires it, but nihui hacked it out somehow. I'm looking into converting the models without MemoryData, but the output ends up skewed. Does the x86 compilation of the macOS build here https://github.com/TNTwise/rife-ncnn-vulkan/releases/tag/20240108 not work through Rosetta? I'll try to compile for arm, but I'm told I need GitHub Pro.

I downloaded and ran the macOS version, but I got this error message.

./rife-ncnn-vulkan -i input_frames -o output_frames -v -s 0 -n 5530 -m rife-v4.14 -f %08d.png -j 10:10:10 -g 0
dyld[34247]: Library not loaded: /usr/local/opt/vulkan-loader/lib/libvulkan.1.dylib
Referenced from: <3CBFA0CE-7D02-3E62-9C66-DE97C0D49C08> /Users/workstation/Downloads/macos/rife-ncnn-vulkan
Reason: tried: '/usr/local/opt/vulkan-loader/lib/libvulkan.1.dylib' (no such file), '/System/Volumes/Preboot/Cryptexes/OS/usr/local/opt/vulkan-loader/lib/libvulkan.1.dylib' (no such file), '/usr/local/opt/vulkan-loader/lib/libvulkan.1.dylib' (no such file), '/usr/local/lib/libvulkan.1.dylib' (no such file), '/usr/lib/libvulkan.1.dylib' (no such file, not in dyld cache)
[1] 34247 abort ./rife-ncnn-vulkan -i input_frames -o output_frames -v -s 0 -n 5530 -m -f -
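
That dyld error means the dynamic linker can't find libvulkan.1.dylib at the Homebrew prefix the binary was linked against. One possible fix is a sketch only, assuming an x86_64 (Rosetta) Homebrew install at /usr/local and the standard formula names, which are not confirmed anywhere in this thread:

# Assumes an x86_64 (Rosetta) Homebrew at /usr/local; formula names are assumptions
brew install vulkan-loader molten-vk
# Check that the path the binary is linked against now exists
ls /usr/local/opt/vulkan-loader/lib/libvulkan.1.dylib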

TNTwise commented 8 months ago

OK, I'm going to have to look into this more. Sadly I don't have a Mac; I'll see if I can reproduce the error in a VM. In the meantime, I'd recommend trying to run the Windows version under Wine + MoltenVK.

bukhalmae145 commented 8 months ago

OK, I'm going to have to look into this more. Sadly I don't have a Mac; I'll see if I can reproduce the error in a VM. In the meantime, I'd recommend trying to run the Windows version under Wine + MoltenVK.

What if I install the Vulkan SDK? Would that work?

TNTwise commented 8 months ago

That might work; sorry, I'm not very familiar with macOS. If you want to interpolate right now, you can run CPU inference.
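
For reference, upstream rife-ncnn-vulkan accepts -g -1 to select CPU inference; assuming this fork keeps that flag, a CPU-only run would look roughly like this (paths and model name copied from the commands earlier in this thread):

# -g -1 requests CPU inference in upstream rife-ncnn-vulkan; assumed to also work in this build
./rife-ncnn-vulkan -i input_frames -o output_frames -n 5530 -m rife-v4.14 -f %08d.png -g -1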

bukhalmae145 commented 8 months ago

That might work; sorry, I'm not very familiar with macOS. If you want to interpolate right now, you can run CPU inference.

But why is the size of the ./rife-ncnn-vulkan Unix executable different between nihui's build and the link you gave me? (nihui's = 25.8 MB / yours = 9.6 MB)

And after I installed the Vulkan SDK, I got this error:

./rife-ncnn-vulkan -i input_frames -o output_frames -v -s 0 -n 5530 -m rife-v4.13 -f %08d.png -j 10:10:10 -g 0
vkCreateInstance failed -9
vkCreateInstance failed -9
invalid gpu device

TNTwise commented 8 months ago

I couldn't compile it for arm, so I deleted that build. You can try recompiling it yourself following the instructions in the repo README.

bukhalmae145 commented 8 months ago

I couldn't compile it for arm, so I deleted that build. You can try recompiling it yourself following the instructions in the repo README.

I found out that I can run it on the CPU instead. But how do I compile it for arm?

TNTwise commented 8 months ago

That might work; sorry, I'm not very familiar with macOS. If you want to interpolate right now, you can run CPU inference.

But why is the size of the ./rife-ncnn-vulkan Unix executable different between nihui's build and the link you gave me? (nihui's = 25.8 MB / yours = 9.6 MB)

And after I installed the Vulkan SDK, I got this error:

./rife-ncnn-vulkan -i input_frames -o output_frames -v -s 0 -n 5530 -m rife-v4.13 -f %08d.png -j 10:10:10 -g 0
vkCreateInstance failed -9
vkCreateInstance failed -9
invalid gpu device

I'm assuming that's because the binary didn't include MoltenVK for some reason.

TNTwise commented 8 months ago

I couldn't compile it for arm, so I deleted that build. You can try recompiling it yourself following the instructions in the repo README.

I found out that I can run it on the CPU instead. But how do I compile it for arm?

Follow steps here:

https://github.com/TNTwise/rife-ncnn-vulkan#build-from-source

This should also help with macOS: https://github.com/Tencent/ncnn/wiki/how-to-build#build-for-macos
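
A rough outline of the build from those instructions (a sketch only; the exact CMake options and the Vulkan SDK / MoltenVK setup on an arm64 Mac are assumptions, so see the two links above for the authoritative steps):

git clone https://github.com/TNTwise/rife-ncnn-vulkan.git
cd rife-ncnn-vulkan
git submodule update --init --recursive
mkdir build && cd build
cmake ../src          # the Vulkan SDK (with MoltenVK) must be discoverable by CMake
cmake --build . -j 4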

bukhalmae145 commented 8 months ago

That might work; sorry, I'm not very familiar with macOS. If you want to interpolate right now, you can run CPU inference.

But why is the size of the ./rife-ncnn-vulkan Unix executable different between nihui's build and the link you gave me? (nihui's = 25.8 MB / yours = 9.6 MB)

And after I installed the Vulkan SDK, I got this error:

./rife-ncnn-vulkan -i input_frames -o output_frames -v -s 0 -n 5530 -m rife-v4.13 -f %08d.png -j 10:10:10 -g 0
vkCreateInstance failed -9
vkCreateInstance failed -9
invalid gpu device

I'm assuming that's because the binary didn't include MoltenVK for some reason.

The release note says:

MoltenVK

This SDK provides partial Vulkan support through the use of the MoltenVK library which is a "translation" or "porting" library that maps most of the Vulkan functionality to the underlying graphics support (via Metal) on macOS, iOS, and tvOS platforms. It is NOT a fully-conforming Vulkan driver for macOS, iOS, or tvOS devices.

There are two ways to make use of MoltenVK in your shipping Vulkan-based applications. One is to link directly to the MoltenVK static or dynamic library, which will give you direct access to the Vulkan API, and allows for some mixed use with Vulkan and low-level Metal capabilities. This option is not practical if you wish to maintain portability of your Vulkan rendering code across platforms. You will also sacrifice the ability to use the Vulkan Validation layers. However, this is currently the only way to use MoltenVK on mobile devices, and an XCFramework is provided as a static library that can be linked directly to your application. Please see the MoltenVK Runtime User Guide on the MoltenVK GitHub for more information about MoltenVK specifics.

For desktop applications, the recommended usage model is to use the MoltenVK dynamic library in conjunction with the Vulkan Loader. In this scenario the MoltenVK library takes on the role of the ICD from the point of view of the application and the Vulkan Loader. In this mode, you link only to the Vulkan Loader, and not the MoltenVK library directly. You will include the MoltenVK and the Vulkan Loader dynamic libraries in your application's bundle when distributing your software. When the Vulkan SDK is installed on macOS, these runtime components are placed in system directories and can be easily used during development without embedding them in the app bundle. Even if you decide to ship your application linked to the static MoltenVK library, during development we recommend you make use of the Vulkan Loader and Vulkan Validation Layers as they are a tremendous boon for debugging your Vulkan-based rendering code.

Regardless of the method used, your applications are distributed with everything they need to use Vulkan over Metal all included in your application's bundle. No additional system files or runtime components are needed by the end users of your applications.

TNTwise commented 8 months ago

Tomorrow, I'll try to recompile it with vulkan and moltenvk. I'll see if I can compile an arm version too.

bukhalmae145 commented 8 months ago

Tomorrow, I'll try to recompile it with vulkan and moltenvk. I'll see if I can compile an arm version too.

It worked after installing an old version of the Vulkan SDK that includes MoltenVK, and it also worked with the vapoursynth-rife-ncnn-vulkan models. But I'm struggling with scene changes.

TNTwise commented 8 months ago

https://github.com/nihui/rife-ncnn-vulkan/issues/65

bukhalmae145 commented 8 months ago

nihui/rife-ncnn-vulkan#65

If I extract the scene-change frames with the ffmpeg command, what about the frames that are missing?

TNTwise commented 8 months ago

If frames are missing, increase the sensitivity (use a lower number). Also, sorry to ask this, but can you provide a script of the commands you used to compile rife-ncnn-vulkan successfully? It would also be great if you can upload a zip of the binary. Thank you.
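
In terms of the raw ffmpeg command used earlier, the sensitivity maps to the scene threshold, so a lower number means a lower value inside gt(scene,...). A hedged example that flags more frames as scene changes than the 0.3 used before (input.mp4 is a placeholder):

# 0.1 instead of 0.3: lower threshold, more frames detected as scene changes
ffmpeg -i input.mp4 -filter_complex "select='gt(scene,0.1)',metadata=print" -vsync vfr transition/%08d.png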

bukhalmae145 commented 8 months ago

If frames are missing, increase the sensitivity (use a lower number). Also, sorry to ask this, but can you provide a script of the commands you used to compile rife-ncnn-vulkan successfully? It would also be great if you can upload a zip of the binary. Thank you.

No, I didn't compile it; I just used the one you gave me and installed the old version of the Vulkan SDK. I also tried to follow your instructions for removing the weird transition frames, but I'm confused. How does extracting the weird frames remove them? Should I delete them all one by one? And what about the missing timecodes after I remove frames?

TNTwise commented 8 months ago

You should be able to drag every image in the transitions folder into the output frames folder after a render is finished.
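
In shell terms that is just copying the extracted transition frames over the interpolated frames that share the same numbers (a sketch; the folder names are the ones used earlier in this thread, and it assumes the transition frames have already been renamed to output frame numbers):

# Overwrite the interpolated frames at each detected scene change
cp -f transitions/*.png output_frames/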

TNTwise commented 8 months ago

I recompiled it successfully, and the binary is 26 MB. If you want to use it instead of the current binary + Vulkan SDK setup, it should be better: https://github.com/TNTwise/Rife-Vulkan-Models/releases/download/models/macos-rife-ncnn-binary.zip

bukhalmae145 commented 8 months ago

I recompiled it successfully, and the binary is 26 MB. If you want to use it instead of the current binary + Vulkan SDK setup, it should be better: https://github.com/TNTwise/Rife-Vulkan-Models/releases/download/models/macos-rife-ncnn-binary.zip

Thank you.

bukhalmae145 commented 8 months ago

You should be able to drag every image in the transitions folder into the output frames folder after a render is finished.

I'm sorry, but I really can't understand your instructions... I'm stuck after using the ffmpeg command you recommended, and I don't know what to do after extracting the weird frames.

TNTwise commented 8 months ago

You should be able to take the frames from the transition folder and put them into the output folder after running the RIFE render; that should remove the weird transition effects that were generated.

bukhalmae145 commented 8 months ago

You should be able to take the frames from the transition folder and put them into the output folder after running the RIFE render; that should remove the weird transition effects that were generated.

Should I use the transition command with the input video?

TNTwise commented 8 months ago

You should be able to take the frames from the transition folder and put them into the output folder after running the RIFE render; that should remove the weird transition effects that were generated.

Should I use the transition command with the input video?

Yes

bukhalmae145 commented 8 months ago

You should be able to take the frames from the transition folder and put them into the output folder after running the RIFE render; that should remove the weird transition effects that were generated.

Should I use the transition command with the input video?

Yes

But just moving the extracted frames to the output_frames directory doesn't remove the weird frames... :(

TNTwise commented 8 months ago

Try a higher sensitivity (use 1 or 2 when selecting the ffmpeg sensitivity); transition detection will not always be accurate.

bukhalmae145 commented 8 months ago

Try a higher sensitivity (use 1 or 2 when selecting the ffmpeg sensitivity); transition detection will not always be accurate.

No, I mean the extraction is decent, but just dragging the extracted frames into the RIFE-rendered frames directory doesn't remove the weird frames.

TNTwise commented 8 months ago

Check if the frame numbers line up, I've tested it and my math seems right, but there could always be an error.

bukhalmae145 commented 8 months ago

Check if the frame numbers line up, I've tested it and my math seems right, but there could always be an error.

The frame number of the extracted frame and the number of the weird frame don't match up. I will add a screenshot.

Screenshot 2024-01-13 at 3 20 38 PM
TNTwise commented 8 months ago

Oh. The script was made for Linux, so somewhere it isn't moving the frames to the numbers they should be moved to. Does the script produce any logs?

bukhalmae145 commented 8 months ago

Oh. The script was made for Linux, so somewhere it isn't moving the frames to the numbers they should be moved to. Does the script produce any logs?

ffmpeg -i 00010.m2ts -filter_complex "select='gt(scene,0.3)',metadata=print" -vsync vfr transition/%08d.png

Screenshot 2024-01-13 at 3 24 23 PM
TNTwise commented 8 months ago

I think I understand now. Are you using the ffmpeg command on its own, but not this script:

import os
import subprocess
import re
import math

def extract(Image_Type, input_file, SceneChangeDetection, times, amount_of_zeros=8):
    # Work in the directory that contains the input video.
    os.chdir(os.path.dirname(input_file))
    os.system('rm -rf transitions')
    os.mkdir('transitions/')

    # Extract every frame ffmpeg flags as a scene change into transitions/.
    if Image_Type != 'webp':
        ffmpeg_cmd = f'ffmpeg -i "{input_file}" -filter_complex "select=\'gt(scene\,{SceneChangeDetection})\',metadata=print" -vsync vfr -q:v 1 "transitions/%07d.{Image_Type}"'
    else:
        ffmpeg_cmd = f'ffmpeg -i "{input_file}" -filter_complex "select=\'gt(scene\,{SceneChangeDetection})\',metadata=print" -vsync vfr -q:v 100 "transitions/%07d.png"'

    output = subprocess.check_output(ffmpeg_cmd, shell=True, stderr=subprocess.STDOUT)
    output_lines = output.decode("utf-8").split("\n")
    timestamps = []

    # Read the source frame rate so pts_time values can be converted to frame numbers.
    ffprobe_cmd = f'ffprobe -v error -select_streams v -of default=noprint_wrappers=1:nokey=1 -show_entries stream=r_frame_rate "{input_file}"'
    result = subprocess.check_output(ffprobe_cmd, shell=True).decode("utf-8")
    match = re.match(r'(\d+)/(\d+)', result)
    numerator, denominator = match.groups()
    fps_value = int(numerator) / int(denominator)

    # Map each detected scene change to its frame number in the interpolated output.
    for line in output_lines:
        if "pts_time" in line:
            timestamp = str(line.split("_")[3])
            timestamp = str(timestamp.split(':')[1])
            timestamps.append(math.ceil(round(float(timestamp) * fps_value) * times))

    # Rename the sequentially numbered extracted frames to their output frame numbers.
    transitions = sorted(os.listdir('transitions/'))
    for iteration, name in enumerate(transitions):
        if Image_Type != 'webp':
            os.system(f'mv transitions/{str(iteration + 1).zfill(7)}.{Image_Type} transitions/{timestamps[iteration]}.{Image_Type}')
        else:
            os.system(f'mv transitions/{str(iteration + 1).zfill(7)}.png transitions/{timestamps[iteration]}.{Image_Type}')

    # Duplicate each transition frame under the zero-padded names of the interpolated frames it should replace.
    for i in timestamps:
        for j in range(math.ceil(times)):
            os.system(f'cp transitions/{i}.{Image_Type} transitions/{str(int(i) - j).zfill(amount_of_zeros)}.{Image_Type}')
        os.remove(f'transitions/{i}.{Image_Type}')

image = input('Please pick an image type:\n1:PNG\n2:JPG\n3:WEBP\n(Please pick 1, 2 or 3): ')
if image == '1':
    image = 'png'
elif image == '2':
    image = 'jpg'
elif image == '3':
    image = 'webp'
else:
    print('Invalid answer!')
    exit()

file = input('\nPlease paste the input file location here: ')
if not os.path.isfile(file):
    print('Not a file!')
    exit()

sensitivity = input('\nPlease enter the sensitivity (0-9)\n(0 is the most sensitive, meaning it will detect the most frames as scene changes)\n(9 is the least sensitive, meaning it will detect the fewest frames as scene changes.)\n: ')
try:
    if int(sensitivity) > 9 or int(sensitivity) < 0:
        print('Invalid sensitivity')
        exit()
except:
    print('Not an integer!')
    exit()

try:
    timestep = float(input('\nPlease enter a timestep (not the number of frames, just the multiplier): '))
except:
    print('Not a float!')
    exit()

try:
    amount_of_zeros = int(input('\nPlease enter the number you used for the amount of zeros per frame, default = %08d\nIf you changed this value when extracting frames, please put that value here.\nIf not, just press enter. '))
except:
    amount_of_zeros = 8

extract(image, file, f'0.{sensitivity}', timestep, amount_of_zeros)

bukhalmae145 commented 8 months ago

I think I understand now. Are you using the ffmpeg command on its own, but not this script?

No, I used the ffmpeg command.

TNTwise commented 8 months ago

Use the Python script: create a new file called transitions.py, paste the script into it, and then in your terminal run python3 transitions.py. You will need Python installed for it to work.

Note: it will delete any folder called transitions, so if you have a folder or file named transitions, make a backup of it or rename it first.

bukhalmae145 commented 8 months ago

Use the Python script: create a new file called transitions.py, paste the script into it, and then in your terminal run python3 transitions.py. You will need Python installed for it to work.

I tried to use the script above, but I'm stuck here.

Screenshot 2024-01-13 at 3 29 37 PM
TNTwise commented 8 months ago

That's the multiplier. Say you told RIFE to render out -n 400 frames and you started with 200 frames; the multiplier would be 2, and that is what you would put there. It's just asking for the factor by which you interpolated.

bukhalmae145 commented 8 months ago

That's the multiplier. Say you told RIFE to render out -n 400 frames and you started with 200 frames; the multiplier would be 2, and that is what you would put there. It's just asking for the factor by which you interpolated.

Thank you so much!! I appreciate you!

TNTwise commented 8 months ago

When pasting in the video, paste the full file path. For me on Linux it would be something like /home/pax/video.mp4.

bukhalmae145 commented 8 months ago

When pasting in the video, paste the full file path. For me on Linux it would be something like /home/pax/video.mp4.

Yeah it worked!

bukhalmae145 commented 8 months ago

When pasting in the video, paste the full file path. For me on Linux it would be something like /home/pax/video.mp4.

Can I use the code above with image frames? (png/webp)

TNTwise commented 8 months ago

For inputting image frames? Sadly no, ffmpeg relies on a stream to detect transitions and individual files don't have that.

bukhalmae145 commented 8 months ago

For inputting image frames? Sadly no, ffmpeg relies on a stream to detect transitions and individual files don't have that.

But with this command it was able to do that:

ffmpeg -i input_frames/%08d.png -filter_complex "select='gt(scene,0.3)',metadata=print" -vsync vfr transition/%08d.png

TNTwise commented 8 months ago

If it works, then that's good; last time I checked it did not work.

TNTwise commented 8 months ago

Yes, you could, sadly I'm on my phone and can't modify it right now.

bukhalmae145 commented 8 months ago

Yes, you could, sadly I'm on my phone and can't modify it right now.

What does the timestep option do?

TNTwise commented 8 months ago

It is just the frame multiplier: take the final fps of the video divided by the initial fps. It's so the algorithm knows which frame numbers to assign to the images.
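
A hedged example of working that multiplier out: read the source frame rate with ffprobe (the same command the script uses; input.mp4 is a placeholder) and divide the output frame rate by it.

# Prints r_frame_rate, e.g. 24000/1001 (~23.976 fps); interpolating that to ~47.952 fps means a timestep of 2
ffprobe -v error -select_streams v -of default=noprint_wrappers=1:nokey=1 -show_entries stream=r_frame_rate input.mp4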

TNTwise commented 8 months ago

Marking this as closed due to inactivity. If you need more help regarding this, you can always open a new issue here.