the-database / mpv-upscale-2x_animejanai

Real-time anime upscaling to 4k in mpv with Real-ESRGAN compact models

I made settings to prevent frame drops #9

Open foxbox93 opened 1 year ago

foxbox93 commented 1 year ago

This time, I've made optimizations after a lot of trial and error.

The changes are as follows:

1. Add scale_to_540, scale_to_675, and scale_to_810

Videos that are upscaled twice, and 60 fps videos above 1080p, almost unconditionally get frame drops. To find a solution, I experimented with the animejanai_v2.conf settings to find values that prevent frame drops while preserving the most quality.

In conclusion, frame drops were handled best with resize_height_before_first_2x=540, so I started writing code based on that.

scale_to_810: 1440p60 upscaling
scale_to_675: 1080p60 upscaling
scale_to_540: 809p30 to 540p30 upscale_twice
scale_to_1080: 1079p to 810p upscaling

2. Different engine settings for different situations

Compact / UltraCompact / SuperUltraCompact (strong.v1)

I used all three of these to find the best combination.

30 fps:
2159p - 810p >>> resize1080 + UltraCompact >>> 2160p
809p - 540p >>> resize540 + SuperUltraCompact + SuperUltraCompact >>> 2160p
539p - 1p >>> Compact + UltraCompact >>> 2156p - 4p

60 fps:
2159p - 1081p >>> resize810 + SuperUltraCompact >>> 1620p
1080p - 810p >>> resize675 + UltraCompact >>> 1350p
809p - 540p >>> UltraCompact >>> 1618p - 1082p
539p - 1p >>> Compact >>> 1078p - 2p

I am very pleased with this result.

I also tried rife_cuda.py, but I think that's going to be hard.

Below is the code I used.

I'll mark ###new### on the parts where I changed the code.

animejanai_v2.py

import vapoursynth as vs
import os
import subprocess
import logging
import configparser
import sys
from logging.handlers import RotatingFileHandler

sys.path.append(os.path.dirname(os.path.abspath(__file__)))

import rife_cuda
import animejanai_v2_config
# import gmfss_cuda

# trtexec num_streams
TOTAL_NUM_STREAMS = 4

core = vs.core
core.num_threads = 4  # can influence ram usage

plugin_path = os.path.join(os.path.dirname(os.path.abspath(__file__)),
                           r"..\..\vapoursynth64\plugins\vsmlrt-cuda")
model_path = os.path.join(plugin_path, r"..\models\animejanai")

formatter = logging.Formatter(fmt='%(asctime)s %(levelname)-8s %(message)s',
                              datefmt='%Y-%m-%d %H:%M:%S')
logger = logging.getLogger('animejanai_v2')

config = {}

def init_logger():
    global logger
    logger.setLevel(logging.DEBUG)
    rfh = RotatingFileHandler(os.path.join(os.path.dirname(os.path.abspath(__file__)), 'animejanai_v2.log'),
                              mode='a', maxBytes=1 * 1024 * 1024, backupCount=2, encoding=None, delay=0)
    rfh.setFormatter(formatter)
    rfh.setLevel(logging.DEBUG)
    logger.addHandler(rfh)

# model_type: HD or SD
# binding: 1 through 9
def find_model(model_type, binding):
    section_key = f'slot_{binding}'
    key = f'{model_type.lower()}_model'

    if section_key in config:
        if key in config[section_key]:
            return config[section_key][key]
    return None

def create_engine(onnx_name):
    onnx_path = os.path.join(model_path, f"{onnx_name}.onnx")
    if not os.path.isfile(onnx_path):
        raise FileNotFoundError(onnx_path)

    engine_path = os.path.join(model_path, f"{onnx_name}.engine")

    subprocess.run([os.path.join(plugin_path, "trtexec"), "--fp16", f"--onnx={onnx_path}",
                    "--minShapes=input:1x3x8x8", "--optShapes=input:1x3x1080x1920", "--maxShapes=input:1x3x1080x1920",
                    f"--saveEngine={engine_path}", "--tacticSources=+CUDNN,-CUBLAS,-CUBLAS_LT"],
                   cwd=plugin_path)

def scale_to_1080(clip, w=1920, h=1080):
    if clip.width / clip.height > 16 / 9:
        prescalewidth = w
        prescaleheight = w * clip.height / clip.width
    else:
        prescalewidth = h * clip.width / clip.height
        prescaleheight = h
    return vs.core.resize.Bicubic(clip, width=prescalewidth, height=prescaleheight)

###new###
def scale_to_810(clip, w=1440, h=810):
    if clip.width / clip.height > 16 / 9:
        prescalewidth = w
        prescaleheight = w * clip.height / clip.width
    else:
        prescalewidth = h * clip.width / clip.height
        prescaleheight = h
    return vs.core.resize.Bicubic(clip, width=prescalewidth, height=prescaleheight)

###new###
def scale_to_675(clip, w=1200, h=675):
    if clip.width / clip.height > 16 / 9:
        prescalewidth = w
        prescaleheight = w * clip.height / clip.width
    else:
        prescalewidth = h * clip.width / clip.height
        prescaleheight = h
    return vs.core.resize.Bicubic(clip, width=prescalewidth, height=prescaleheight)

###new###
def scale_to_540(clip, w=960, h=540):
    if clip.width / clip.height > 16 / 9:
        prescalewidth = w
        prescaleheight = w * clip.height / clip.width
    else:
        prescalewidth = h * clip.width / clip.height
        prescaleheight = h
    return vs.core.resize.Bicubic(clip, width=prescalewidth, height=prescaleheight)

###new###
def upscale2x(clip, sd_engine_name, hd_engine_name, shd_engine_name, num_streams):
    engine_name = None  # fall through to returning the clip unchanged if no rule matches
    if clip.height == 675 or clip.width == 1200:
        engine_name = hd_engine_name
    else:
        if (clip.height < 540 or clip.width < 960):
            engine_name = sd_engine_name
        if (clip.height == 1080 or clip.width == 1920) or ((clip.height > 540 and clip.width > 960) and (clip.height < 1080 or clip.width < 1920)):
            engine_name = hd_engine_name
        if (clip.height == 540 or clip.width == 960) or (clip.height == 810 or clip.width == 1440):
            engine_name = shd_engine_name
    if engine_name is None:
        return clip
    engine_path = os.path.join(model_path, f"{engine_name}.engine")

    message = f"upscale2x: scaling 2x from {clip.width}x{clip.height} with engine={engine_name}; num_streams={num_streams}"
    logger.debug(message)
    print(message)

    if not os.path.isfile(engine_path):
        create_engine(engine_name)

    return core.trt.Model(
        clip,
        engine_path=engine_path,
        num_streams=num_streams,
    )
###new###
def upscale22x(clip, hd_engine_name, shd_engine_name, num_streams):
    engine_name = None  # fall through to returning the clip unchanged if no rule matches
    if clip.height == 1080 or clip.width == 1920:
        engine_name = shd_engine_name
    else:
        if clip.height < 1080 or clip.width < 1920:
            engine_name = hd_engine_name
    if engine_name is None:
        return clip
    engine_path = os.path.join(model_path, f"{engine_name}.engine")

    message = f"upscale22x: scaling 2x from {clip.width}x{clip.height} with engine={engine_name}; num_streams={num_streams}"
    logger.debug(message)
    print(message)

    if not os.path.isfile(engine_path):
        create_engine(engine_name)

    return core.trt.Model(
        clip,
        engine_path=engine_path,
        num_streams=num_streams,
    )

def run_animejanai(clip, sd_engine_name, hd_engine_name, shd_engine_name, container_fps, resize_factor_before_first_2x,
                   resize_height_before_first_2x, resize_720_to_1080_before_first_2x, do_upscale,
                   resize_to_1080_before_second_2x, upscale_twice, use_rife):
    if do_upscale:
        colorspace = "709"
        colorlv = clip.get_frame(0).props._ColorRange
        fmt_in = clip.format.id

        if clip.height < 720 or clip.width < 1280:
            colorspace = "170m"

        if resize_height_before_first_2x != 0:
            resize_factor_before_first_2x = 1

        try:
            # try half precision first
            clip = vs.core.resize.Bicubic(clip, format=vs.RGBH, matrix_in_s=colorspace,
                                          width=clip.width/resize_factor_before_first_2x,
                                          height=clip.height/resize_factor_before_first_2x)

            clip = run_animejanai_upscale(clip, sd_engine_name, hd_engine_name, shd_engine_name, container_fps, resize_factor_before_first_2x,
                                          resize_height_before_first_2x, resize_720_to_1080_before_first_2x, do_upscale,
                                          resize_to_1080_before_second_2x, upscale_twice, use_rife, colorspace, colorlv,
                                          fmt_in)
        except:  # half precision unsupported; fall back to single precision
            clip = vs.core.resize.Bicubic(clip, format=vs.RGBS, matrix_in_s=colorspace,
                                          width=clip.width/resize_factor_before_first_2x,
                                          height=clip.height/resize_factor_before_first_2x)
            clip = run_animejanai_upscale(clip, sd_engine_name, hd_engine_name, shd_engine_name, container_fps, resize_factor_before_first_2x,
                                          resize_height_before_first_2x, resize_720_to_1080_before_first_2x, do_upscale,
                                          resize_to_1080_before_second_2x, upscale_twice, use_rife, colorspace, colorlv,
                                          fmt_in)
            ###new###
    if use_rife:
        clip = rife_cuda.rife(clip, clip.width, clip.height, container_fps)

    clip.set_output()

    ###new###
def run_animejanai_upscale(clip, sd_engine_name, hd_engine_name, shd_engine_name, container_fps, resize_factor_before_first_2x,
                          resize_height_before_first_2x, resize_720_to_1080_before_first_2x, do_upscale,
                          resize_to_1080_before_second_2x, upscale_twice, use_rife, colorspace, colorlv, fmt_in):
    ###new###
    if resize_height_before_first_2x == 540:
        if (clip.height >= 540 or clip.width >= 960) and container_fps >= 45:
            if (clip.height <= 1080 or clip.width <= 1920):
                clip = scale_to_675(clip)
            else:
                clip = scale_to_810(clip)
        else:
            if (clip.height >= 540 or clip.width >= 960) and clip.height < 810 and clip.width < 1440 and container_fps < 45:
                clip = scale_to_540(clip)
            if (clip.height >= 810 or clip.width >= 1440) and clip.height < 2160 and clip.width < 3840:
                clip = scale_to_1080(clip)

    # if not 540, error occurred at upscale2x, upscale22x
    if resize_height_before_first_2x != 540 and resize_height_before_first_2x != 0 :
        clip = scale_to_1080(clip, resize_height_before_first_2x * 16 / 9, resize_height_before_first_2x)

    # pre-scale 720p or higher to 1080 > NO
    if resize_720_to_1080_before_first_2x:
        if (clip.height >= 810 or clip.width >= 1440) and clip.height < 1080 and clip.width < 1920:
            clip = scale_to_1080(clip)

    num_streams = TOTAL_NUM_STREAMS
    if upscale_twice and ( clip.height <= 540 or clip.width <= 960 ) and container_fps < 45:
        num_streams //= 2  # keep num_streams an integer

    # upscale 2x
    clip = upscale2x(clip, sd_engine_name, hd_engine_name, shd_engine_name, num_streams)

    # upscale 2x again if necessary
    if upscale_twice and ( clip.height <= 1080 or clip.width <= 1920 ) and container_fps < 45:
        # downscale down to 1080 if first 2x went over 1080,
        # or scale up to 1080 if enabled >> NO
        if resize_to_1080_before_second_2x and ( clip.height > 720 or clip.width > 1280):
            clip = scale_to_1080(clip)

        # upscale 2x again
        clip = upscale22x(clip, hd_engine_name, shd_engine_name, num_streams)

    fmt_out = fmt_in
    if fmt_in not in [vs.YUV410P8, vs.YUV411P8, vs.YUV420P8, vs.YUV422P8, vs.YUV444P8, vs.YUV420P10, vs.YUV422P10,
                      vs.YUV444P10]:
        fmt_out = vs.YUV420P10

    return vs.core.resize.Bicubic(clip, format=fmt_out, matrix_s=colorspace, range=1 if colorlv == 0 else None)

# keybinding: 1-9
def run_animejanai_with_keybinding(clip, container_fps, keybinding):
    sd_engine_name = find_model("SD", keybinding)
    hd_engine_name = find_model("HD", keybinding)
    shd_engine_name = find_model("SHD", keybinding)
    section_key = f'slot_{keybinding}'
    do_upscale = config[section_key].get('upscale_2x', True)
    upscale_twice = config[section_key].get('upscale_4x', True)
    use_rife = config[section_key].get('rife', True)
    resize_720_to_1080_before_first_2x = config[section_key].get('resize_720_to_1080_before_first_2x', True)
    resize_factor_before_first_2x = config[section_key].get('resize_factor_before_first_2x', 1)
    resize_height_before_first_2x = config[section_key].get('resize_height_before_first_2x', 0)
    resize_to_1080_before_second_2x = config[section_key].get('resize_to_1080_before_second_2x', True)

    if do_upscale:
        if sd_engine_name is None and hd_engine_name is None and shd_engine_name is None:
            raise FileNotFoundError(
                f"2x upscaling is enabled but no SD, HD, or SHD model is defined for slot {keybinding}. Expected at least one of sd_model, hd_model, or shd_model to be specified in animejanai.conf.")
        ###new###
    if (clip.height < 2160 or clip.width < 3840) and container_fps < 100:
        run_animejanai(clip, sd_engine_name, hd_engine_name, shd_engine_name, container_fps, resize_factor_before_first_2x,
                   resize_height_before_first_2x, resize_720_to_1080_before_first_2x, do_upscale,
                   resize_to_1080_before_second_2x, upscale_twice, use_rife)

def init():
    global config
    config = animejanai_v2_config.read_config()
    if config['global']['logging']:
        init_logger()

init()

animejanai_v2_1.vpy

import sys, os
sys.path.append(os.path.dirname(os.path.abspath(__file__)))

import animejanai_v2

animejanai_v2.run_animejanai_with_keybinding(video_in, container_fps, 1)

animejanai_v2.conf

[slot_1]

SD_model=2x_AnimeJaNai_Strong_V1_Compact_net_g_120000

HD_model=2x_AnimeJaNai_Strong_V1_UltraCompact_net_g_100000

SHD_model=2x_AnimeJaNai_Strong_V1_SuperUltraCompact_net_g_100000

resize_factor_before_first_2x=1

resize_height_before_first_2x=540

resize_720_to_1080_before_first_2x=no

upscale_2x=yes

upscale_4x=yes

resize_to_1080_before_second_2x=no

rife=no
hooke007 commented 1 year ago

vs's built-in resize is inefficient, and inserting multiple CPU filters makes it worse. The best solution is to develop a new plugin or model that fuses pre-resizing into the janai upscaling process, just like rife_v2 did.

the-database commented 1 year ago

Thanks for sharing. I pushed some updates to the v2 branch based on your previous comments to increase its flexibility. The format of the conf file has been completely redone and I believe it's able to handle all of the scenarios you want without requiring any custom code changes.

If you'd like, back up your existing files, download the latest scripts from the v2 branch and try setting it up to see if it can do what you want. Feel free to leave any feedback on the latest changes.

I have also started working on benchmarking scripts which will be helpful in setting up the configurations for animejanai. I have been collecting benchmark results here (https://github.com/the-database/mpv-upscale-2x_animejanai/wiki/Benchmarks) but I haven't tested the benchmarking script with the latest changes so it may not be working yet. Once it's working you'll be able to run the benchmark all .bat file which will run a set of benchmarks and generate a benchmark.txt which contains a markdown table which you can paste the results of to the wiki.

EDIT: The benchmark script has been fixed and is working again. Please see the wiki for detailed instructions if you're interested in running them.

foxbox93 commented 1 year ago

Thanks for sharing. I pushed some updates to the v2 branch based on your previous comments to increase its flexibility. The format of the conf file has been completely redone and I believe it's able to handle all of the scenarios you want without requiring any custom code changes.

I've read the changes, and I'll give you a bit more feedback here.

I think it's better to base the setting on the number of pixels rather than the height. As an extreme example, vertical videos can be handled unintentionally. I think the conf needs additional code to properly process videos of various dimensions, like the scale_to_1080 you designed.

Deriving the pixel count from the height for 16:9 video (width = height * 16 / 9):

UHD (2160p) = 3840 * 2160 = 8,294,400 pixels
QHD (1440p) = 2560 * 1440 = 3,686,400 pixels
FHD (1080p) = 1920 * 1080 = 2,073,600 pixels
HD (720p) = 1280 * 720 = 921,600 pixels
(0p) = 0 pixels
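These counts follow directly from the height; a quick check in Python:

```python
# Pixel count for a 16:9 resolution, derived from the height alone.
def pixels_16_9(height):
    width = height * 16 // 9  # 16:9 aspect ratio
    return width * height

# UHD, QHD, FHD, HD
assert pixels_16_9(2160) == 8_294_400
assert pixels_16_9(1440) == 3_686_400
assert pixels_16_9(1080) == 2_073_600
assert pixels_16_9(720) == 921_600
```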

animejanai_v2.conf

chain_1_above_this_height=720
chain_1_below_this_height=1080
# Please enter the height based on the 16:9 video
# For example, if you enter 720 and 1080, this will be treated like 720 < x <= 1080

animejanai_v2_config.py

'min_height': float(flat_conf[section].get(f'chain_{chain}_above_this_height', 0)),
'max_height': float(flat_conf[section].get(f'chain_{chain}_below_this_height', "inf")),
...

min_pixel = min_height * math.ceil(min_height * 16 / 9) + 1
max_pixel = max_height * math.ceil(max_height * 16 / 9)

# read as min_pixel = 720 * (720 * 16 / 9) + 1 and max_pixel = 1080 * (1080 * 16 / 9)
# If the user enters the same number for both, or swaps the min and max positions,
# a conditional statement is needed to swap the two values.
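The swap guard mentioned in the comment could look like this (a sketch; pixel_range is a hypothetical helper name, not part of the config code):

```python
import math

def pixel_range(min_height, max_height):
    # swap if the user entered min and max in the wrong order
    if min_height > max_height:
        min_height, max_height = max_height, min_height
    # convert the 16:9 height bounds into a pixel-count range
    min_pixel = min_height * math.ceil(min_height * 16 / 9) + 1
    max_pixel = max_height * math.ceil(max_height * 16 / 9)
    return min_pixel, max_pixel

# 720 < height <= 1080 becomes 921,601..2,073,600 pixels,
# regardless of the order the user typed the bounds in
assert pixel_range(720, 1080) == (921601, 2073600)
assert pixel_range(1080, 720) == (921601, 2073600)
```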

However, I think speed may suffer with large pixel counts. In the code I shared above, the width range was also considered in the resize section, but that is complicated too. I'll share my opinion if I come up with a neater, better idea.

foxbox93 commented 1 year ago

I have also started working on benchmarking scripts which will be helpful in setting up the configurations for animejanai. I have been collecting benchmark results here (https://github.com/the-database/mpv-upscale-2x_animejanai/wiki/Benchmarks) but I haven't tested the benchmarking script with the latest changes so it may not be working yet. Once it's working you'll be able to run the benchmark all .bat file which will run a set of benchmarks and generate a benchmark.txt which contains a markdown table which you can paste the results of to the wiki.

I tried to benchmark but failed. (I still lack coding knowledge. I tried running all.bat but there was no response, and I didn't understand where to set the 'video_path' in the .vpy. I'm sorry about this.)

Instead, I tried every possible combination for each situation and recorded the image quality, CPU and GPU data, frame drops, etc. The following is the best option in my situation.

EDIT (04.25)

i5 13600k + Geforce RTX 4070 ti

/// C = SD = Compact_v1_strong, UC = HD = UltraCompact_v1_strong, SUC = SHD = SuperUltraCompact_v1_strong ///

30fps:
over 3,000,000 pixels (1440p and over) >>> resize720 + C + resize1080 + SUC
under 3,000,000 pixels (1080p, 720p) >>> resize540 + UC + SUC
under 500,000 pixels (480p and under) >>> UC + SUC

60fps:
over 1,500,000 pixels (1080p and over) >>> resize720 + UC
under 1,500,000 pixels (720p) >>> UC
under 500,000 pixels (480p and under) >>> C


EDIT (04.25): example code for a simple implementation

def upscale2x(clip, sd_engine_name, hd_engine_name, shd_engine_name, container_fps, num_streams):
    if container_fps < 40 :
        if (clip.height==720 or clip.width == 1280): #1440p -> 720p -> 1440p -> 1080p -> 2160p
            engine_name = sd_engine_name
        else:
            engine_name = hd_engine_name
    else:
        if (clip.height * clip.width >= 500000):
            engine_name = hd_engine_name
        else:
            engine_name = sd_engine_name
    if engine_name is None:
        return clip
    engine_path = os.path.join(model_path, f"{engine_name}.engine")

    message = f"upscale2x: scaling 2x from {clip.width}x{clip.height} with engine={engine_name}; num_streams={num_streams}"
    logger.debug(message)
    print(message)

    if not os.path.isfile(engine_path):
        create_engine(engine_name)

    return core.trt.Model(
        clip,
        engine_path=engine_path,
        num_streams=num_streams,
    )

def upscale22x(clip, sd_engine_name, hd_engine_name, shd_engine_name, num_streams):
    if (clip.height * clip.width >= 30000):
        engine_name = shd_engine_name
    else:
        engine_name = hd_engine_name
    if engine_name is None:
        return clip
    engine_path = os.path.join(model_path, f"{engine_name}.engine")

    message = f"upscale22x: scaling 2x from {clip.width}x{clip.height} with engine={engine_name}; num_streams={num_streams}"
    logger.debug(message)
    print(message)

    if not os.path.isfile(engine_path):
        create_engine(engine_name)

    return core.trt.Model(
        clip,
        engine_path=engine_path,
        num_streams=num_streams,
    )
def run_animejanai_upscale(clip, sd_engine_name, hd_engine_name, shd_engine_name, container_fps, resize_factor_before_first_2x,
                          resize_height_before_first_2x, resize_720_to_1080_before_first_2x, do_upscale,
                          resize_to_1080_before_second_2x, upscale_twice, use_rife, colorspace, colorlv, fmt_in):

    num_streams = TOTAL_NUM_STREAMS

    if container_fps > 40:
        if (clip.height * clip.width >= 1500000) and (clip.height * clip.width < 8294400):
            clip=scale_to_720(clip)  # scale_to_720: same pattern as scale_to_540, with w=1280, h=720
            clip=upscale2x(clip, sd_engine_name, hd_engine_name, shd_engine_name, container_fps, num_streams)
        if (clip.height * clip.width >= 500000) and ((clip.height * clip.width < 1500000)):
            clip=upscale2x(clip, sd_engine_name, hd_engine_name, shd_engine_name, container_fps, num_streams)
        if (clip.height * clip.width < 500000):
            clip=upscale2x(clip, sd_engine_name, hd_engine_name, shd_engine_name, container_fps, num_streams)
    else:
        num_streams //= 2  # keep num_streams an integer
        if (clip.height * clip.width >= 3000000) and (clip.height * clip.width < 8294400):
            clip=scale_to_720(clip)
            clip=upscale2x(clip, sd_engine_name, hd_engine_name, shd_engine_name, container_fps, num_streams)
            clip=scale_to_1080(clip)
            clip=upscale22x(clip, sd_engine_name, hd_engine_name, shd_engine_name, num_streams)
        if (clip.height * clip.width >= 500000) and ((clip.height * clip.width < 3000000)):
            clip=scale_to_540(clip)
            clip=upscale2x(clip, sd_engine_name, hd_engine_name, shd_engine_name, container_fps, num_streams)
            clip=upscale22x(clip, sd_engine_name, hd_engine_name, shd_engine_name, num_streams)
        if (clip.height * clip.width < 500000):
            clip=upscale2x(clip, sd_engine_name, hd_engine_name, shd_engine_name, container_fps, num_streams)
            clip=upscale22x(clip, sd_engine_name, hd_engine_name, shd_engine_name, num_streams)

    fmt_out = fmt_in
    if fmt_in not in [vs.YUV410P8, vs.YUV411P8, vs.YUV420P8, vs.YUV422P8, vs.YUV444P8, vs.YUV420P10, vs.YUV422P10,
                      vs.YUV444P10]:
        fmt_out = vs.YUV420P10

    return vs.core.resize.Bicubic(clip, format=fmt_out, matrix_s=colorspace, range=1 if colorlv == 0 else None)
the-database commented 1 year ago

Tried to benchmark but failed. (I still lack knowledge of coding. I tried pressing "all.bat" but there was no response and I didn't understand where to fix the 'video_path' in .vpy. I'm sorry about this.)

I created a new issue for benchmarking. Can we continue the benchmarking discussion there if you don't mind? https://github.com/the-database/mpv-upscale-2x_animejanai/issues/10

I think it's better to set it based on the number of pixels rather than the height. As an extreme example, I think vertical videos can be handled unintentionally. I think the conf needs an additional code to properly process videos of various specifications such as "scale_to_1080" you designed.

That's true. I pushed an update to use video resolution instead of video height in the animejanai conf file. In the code, the resolution isn't used directly, but the total number of pixels is calculated and that is what's used in the comparison.

I believe the resize_height_before_upscale property allows you to basically use the scale_to_1080 function as needed? Do you have an example where resize_height_before_upscale doesn't give you enough control?

foxbox93 commented 1 year ago

I believe the resize_height_before_upscale property allows you to basically use the scale_to_1080 function as needed? Do you have an example where resize_height_before_upscale doesn't give you enough control?

I think we need to consider two aspects of the resize_height_before_upscale property.

  1. It also scales based on the height alone.

For example, suppose I use resize_height_before_upscale to change (height 2160 - 1081) to 1080 and (height 1080 - 0) to 720. Then 1920 x 1080 and 1080 x 1920 hold the same number of pixels, but the results are 1280 x 720 and 607.5 x 1080, respectively.

As you can see, the pixel counts changed differently, and a fractional dimension occurred.

At first, I was going to use only the scale_to_1080 that you created and exclude resize_height_before_upscale.

However, it seems that correcting based on pixels will solve this problem. A resize_pixel_before_upscale setting would be able to cope more flexibly than what I set up.
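A quick check of the arithmetic in that example, assuming a plain height-targeted resize:

```python
# Resize that targets only the height, preserving aspect ratio.
def resize_by_height(width, height, target_height):
    return width * target_height / height, target_height

# landscape 1920x1080 (resized to height 720) vs portrait 1080x1920
# (resized to height 1080): same source pixel count, very different results
assert resize_by_height(1920, 1080, 720) == (1280.0, 720)
assert resize_by_height(1080, 1920, 1080) == (607.5, 1080)  # fractional width
```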

  2. User convenience

The resize_height_before_upscale feature requires the user to find values by testing it themselves.

My guess is that for 16:9 video, the height must be an integer multiple of 9 so that the width comes out as an integer.

If the user enters something like 700, the quality may suffer.

I think this value should be specified by the developer rather than entered by the user.

So I stepped down from 1080 in intervals of 135 (a number close to 100), which gives: 540 / 675 / 810 / 945 / 1080

**EDIT: (540 / 630 / 720 / 810 / 900 / 990 / 1080, in steps of 90, is also not bad)

However, this is also just a guess, so your judgment may be better; please use it only as a reference.
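The divisibility reasoning above can be verified quickly: every height in both lists is a multiple of 9, so its 16:9 width is an integer, while a value like 700 is not:

```python
# heights that are multiples of 9 give integer 16:9 widths
for h in [540, 675, 810, 945, 1080] + [630, 720, 900, 990]:
    assert (h * 16) % 9 == 0, h

# 700 is not a multiple of 9, so the width would be fractional
assert (700 * 16) % 9 != 0
print(700 * 16 / 9)  # 1244.4...
```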

Lastly

Regarding the benchmark test, I will check it again later in #10 when I have time. Thank you for your confirmation.

the-database commented 1 year ago

I see what you're saying. Maybe this can be handled using a new resize_resolution_before_upscale option, which would behave similarly to some of your suggestions. I'm not sure about the name of this option, it might be misleading and I might change it if I think of a better name.

It would not resize the video to the exact resolution specified. Instead, it would calculate the total pixels specified, and then resize the original video to the largest size possible, with total pixels less than or equal to the total pixels specified.

For example, if resize_resolution_before_upscale=1280x720 (which has 921600 total pixels):

This setting would allow you to do a resize and guarantee the total number of pixels is within the limits that your hardware can handle. I believe that's what you are looking for. What do you think?
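A minimal sketch of that behavior (resize_dims is an illustrative name, not part of animejanai): scale the source uniformly until the total pixels fit the budget.

```python
import math

def resize_dims(width, height, pixel_budget):
    # uniform scale factor that brings width*height under the budget
    scale = math.sqrt(pixel_budget / (width * height))
    if scale >= 1.0:
        return width, height  # already within budget
    return math.floor(width * scale), math.floor(height * scale)

# a 4:3 1440x1080 source with a 1280x720 budget (921,600 pixels)
assert resize_dims(1440, 1080, 1280 * 720) == (1108, 831)
```

A real implementation would likely also round each dimension down to an even number for chroma subsampling.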

hooke007 commented 1 year ago

it will be resized to 1108x831

Tip: odd dimensions are not allowed with chroma subsampling

foxbox93 commented 1 year ago

Yeah, that's a better solution than mine. To summarize, the tasks that need to be solved are:

  1. Adjust the size differently depending on the step

  2. Reduce the pixel count further to account for aspect-ratio imbalance

  3. Aim for dimensions with even numbers

But I found one more thing to worry about here.

Targeting pixels alone can exceed the width or height of the monitor's maximum resolution.

For example, my monitor has a maximum resolution of 3840 x 2160.

With resize_resolution_before_upscale=1280x720 (921,600 total pixels), attempting to upscale a vertical video can result in 1656 x 2208, which exceeds the maximum height.

As far as I know, it doesn't work if the upscaling result exceeds 2160p in height or width.

Therefore, the design must ensure the result does not exceed the maximum horizontal and vertical dimensions.
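One way to enforce that limit (a sketch; clamp_to_display is a hypothetical helper, with the limits of a 3840x2160 display as defaults):

```python
def clamp_to_display(width, height, max_w=3840, max_h=2160):
    # shrink proportionally so that neither dimension exceeds the display
    scale = min(max_w / width, max_h / height, 1.0)
    return round(width * scale), round(height * scale)

# the vertical 1656x2208 result from the example above gets capped:
assert clamp_to_display(1656, 2208) == (1620, 2160)
```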


So I'm thinking of using screeninfo to read the monitor's display resolution and then upscale flexibly in any situation.

But I have something important to do tomorrow, so I'll stop thinking here and give you another opinion next week.

If you're thinking of another good approach, please comment.

EDIT 04/30

1

It would be nice to have a function that extracts the monitor's resolution information using screeninfo.

This can solve various problems that arise with vertical videos or vertical monitors.

However, if the user has multiple monitors, an additional setting is needed for which monitor to target. With if monitor.is_primary: only the primary monitor's values are used, so it should be recommended that the user play the video on the primary monitor.
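A sketch of that selection, assuming monitors are represented as (width, height, is_primary) tuples like the fields exposed by screeninfo's get_monitors() (monitor.width, monitor.height, monitor.is_primary):

```python
def primary_resolution(monitors):
    # prefer the monitor flagged as primary
    for width, height, is_primary in monitors:
        if is_primary:
            return width, height
    # fall back to the first monitor if none is flagged
    width, height, _ = monitors[0]
    return width, height

# a 1440p secondary plus a 4K primary monitor:
assert primary_resolution([(2560, 1440, False), (3840, 2160, True)]) == (3840, 2160)
```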

2

I've modified animejanai to handle videos that play vertically. Here are examples together with the actual debug messages.

(1) display resolution: 3840 x 2160, video before: 1080 x 1920, video after: 1212 x 2160

2023-04-30 20:21:49 DEBUG Currently, the vertical video upscaling function is deteriorating. Change display_orientation to portrait.
2023-04-30 20:21:49 DEBUG upscale2x: scaling 2x from 303x540 with engine=2x_AnimeJaNai_Strong_V1_UltraCompact_net_g_100000_vertical; num_streams=2.0
2023-04-30 20:21:49 DEBUG upscale22x: scaling 2x from 606x1080 with engine=2x_AnimeJaNai_Strong_V1_SuperUltraCompact_net_g_100000_vertical; num_streams=2.0

(2) display resolution: 2160 x 3840, video before: 1080 x 1920, video after: 2160 x 3840

2023-04-30 20:04:30 DEBUG Display_orientation : Portrait
2023-04-30 20:04:30 DEBUG upscale2x: scaling 2x from 540x960 with engine=2x_AnimeJaNai_Strong_V1_UltraCompact_net_g_100000_vertical; num_streams=2.0
2023-04-30 20:04:30 DEBUG upscale22x: scaling 2x from 1080x1920 with engine=2x_AnimeJaNai_Strong_V1_SuperUltraCompact_net_g_100000_vertical; num_streams=2.0

3

Modified create_engine to build a vertical version of the engine.

4

I set it to stop when the video exceeds 4K or 70 fps.

And vertical engines are marked separately.


These values are set according to the 4070 Ti's specifications, so I think they need to be modified for others to use.

+++ EDIT (05.03): line 250, line 254: clip = resize_clip(clip, 1080) to avoid out-of-spec resolution errors

Example with videos of 518,400 pixels or fewer (960 x 540 = 518,400; 800 x 600 = 480,000):

960 x 540 > upscale2x > 1920 x 1080 > upscale22x > 3840 x 2160 (ok)
800 x 600 > upscale2x > 1600 x 1200 > upscale22x > 3200 x 2400 (error)
800 x 600 > upscale2x > 1600 x 1200 > resize_to_1080 > 1440 x 1080 > upscale22x > 2880 x 2160 (ok)
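That guard can be expressed as a simple check before the second 2x (a sketch with the assumed 3840x2160 output limit):

```python
MAX_W, MAX_H = 3840, 2160  # assumed maximum supported output resolution

def second_2x_fits(width, height):
    # True if doubling again stays within the output limits
    return width * 2 <= MAX_W and height * 2 <= MAX_H

assert second_2x_fits(1920, 1080)       # 960x540 source after first 2x: ok
assert not second_2x_fits(1600, 1200)   # 800x600 source: 3200x2400 is out of spec
assert second_2x_fits(1440, 1080)       # after resize_to_1080: 2880x2160 ok
```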

Attached file

custom.zip

I attached the files with every modification, however small.

screeninfo is also included.