Closed: Selur closed this issue 5 months ago.
Your error messages are hard to read and are cluttered with HTML markup. In any case, it's caused by insufficient VRAM when building the TensorRT engine. You can try a smaller resolution like 320x240 and see if it builds fine. If larger resolutions fail to build the engine, then just use nvFuser + CUDA Graphs instead.
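For example, something along these lines (a minimal sketch, not a drop-in script: the nvfuser/cuda_graphs flag names are assumed vsfemasr options, the tiny BlankClip is only there to trigger an engine build, and the cache path is the one from the script below; adapt as needed):

# Minimal sketch: check whether the TensorRT engine builds at a small resolution,
# and show the nvFuser + CUDA Graphs fallback (nvfuser/cuda_graphs are assumed vsfemasr flags).
import vapoursynth as vs
from vsfemasr import femasr as FeMaSR

core = vs.core

# Tiny synthetic RGBH clip, just enough to trigger an engine build at 320x240.
test = core.std.BlankClip(format=vs.RGBH, width=320, height=240, length=1)

# 1) Does the TensorRT engine build at this small resolution?
test = FeMaSR(clip=test, device_index=0, trt=True, trt_cache_path=r"J:\tmp")

# 2) If larger resolutions still fail to build, skip TensorRT entirely:
# test = FeMaSR(clip=test, device_index=0, nvfuser=True, cuda_graphs=True)

test.set_output()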
Strange. I just tried this script:
# Imports
import vapoursynth as vs
# getting Vapoursynth core
core = vs.core
import site
import os
import ctypes
# Adding torch dependencies to PATH
path = site.getsitepackages()[0]+'/torch_dependencies/bin/'
ctypes.windll.kernel32.SetDllDirectoryW(path)
path = path.replace('\\', '/')
os.environ["PATH"] = path + os.pathsep + os.environ["PATH"]
os.environ["CUDA_MODULE_LOADING"] = "LAZY"
# Loading Plugins
core.std.LoadPlugin(path="i:/Hybrid/64bit/vsfilters/Support/fmtconv.dll")
core.std.LoadPlugin(path="i:/Hybrid/64bit/vsfilters/SourceFilter/LSmashSource/vslsmashsource.dll")
# source: 'G:\TestClips&Co\test.avi'
# current color space: YUV420P8, bit depth: 8, resolution: 640x352, fps: 25, color matrix: 470bg, yuv luminance scale: limited, scanorder: progressive
# Loading G:\TestClips&Co\test.avi using LWLibavSource
clip = core.lsmas.LWLibavSource(source="G:/TestClips&Co/test.avi", format="YUV420P8", stream_index=0, cache=0, prefer_hw=0)
# Setting detected color matrix (470bg).
clip = core.std.SetFrameProps(clip, _Matrix=5)
# Setting color transfer info (470bg), when it is not set
clip = clip if not core.text.FrameProps(clip,'_Transfer') else core.std.SetFrameProps(clip, _Transfer=5)
# Setting color primaries info (), when it is not set
clip = clip if not core.text.FrameProps(clip,'_Primaries') else core.std.SetFrameProps(clip, _Primaries=5)
# Setting color range to TV (limited) range.
clip = core.std.SetFrameProp(clip=clip, prop="_ColorRange", intval=1)
# making sure frame rate is set to 25
clip = core.std.AssumeFPS(clip=clip, fpsnum=25, fpsden=1)
clip = core.std.SetFrameProp(clip=clip, prop="_FieldBased", intval=0) # progressive
from vsfemasr import femasr as FeMaSR
# adjusting color space from YUV420P8 to RGBH for VsFeMaSR
clip = core.resize.Bicubic(clip=clip, format=vs.RGBH, matrix_in_s="470bg", range_s="limited")
# resizing using FeMaSR
clip = FeMaSR(clip=clip, device_index=0, trt=True, trt_cache_path=r"J:\tmp") # 1280x704
# resizing 1280x704 to 640x352
# adjusting resizing
clip = core.resize.Bicubic(clip=clip, format=vs.RGBS, range_s="limited")
clip = core.fmtc.resample(clip=clip, w=640, h=352, kernel="lanczos", interlaced=False, interlacedd=False)
# adjusting output color from: RGBS to YUV420P10 for x265Model
clip = core.resize.Bicubic(clip=clip, format=vs.YUV420P10, matrix_s="470bg", range_s="limited", dither_type="error_diffusion")
# set output frame rate to 25fps (progressive)
clip = core.std.AssumeFPS(clip=clip, fpsnum=25, fpsden=1)
# Output
clip.set_output()
I monitored the VRAM usage while it ran: it only peaked briefly at 8.4 of 16 GB (on a GeForce RTX 4080) and mostly stayed around 7 GB, yet I still got:
2023-03-26 19:54:49.029
Skip rewriting leaf module
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\tracer\acc_tracer\acc_tracer.py:584: UserWarning: acc_tracer does not support currently support models for training. Calling eval on model before tracing.
warnings.warn(
Skip rewriting leaf module
== Log pass before/after graph to C:\Users\Selur\AppData\Local\Temp\tmpffq7z0qh, before/after are the same = False
== Log pass before/after graph to C:\Users\Selur\AppData\Local\Temp\tmpcq83atls, before/after are the same = True
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_319
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_326
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_329
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_331
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_332
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_339
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_342
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_344
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_345
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_352
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_355
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_357
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_358
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_365
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_368
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_370
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_371
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_378
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_381
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_383
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_384
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_391
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_394
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_396
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_397
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_398
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_405
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_408
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_410
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_411
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_418
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_421
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_423
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_424
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_431
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_434
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_436
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_437
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_444
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_447
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_449
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_450
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_457
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_460
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_462
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_463
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_470
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_473
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_475
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_476
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_477
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_484
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_487
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_489
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_490
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_497
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_500
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_502
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_503
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_510
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_513
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_515
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_516
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_523
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_526
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_528
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_529
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_536
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_539
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_541
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_542
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_549
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_552
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_554
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_555
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_556
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_563
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_566
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_568
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_569
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_576
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_579
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_581
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_582
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_589
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_592
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_594
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_595
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_602
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_605
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_607
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_608
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_615
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_618
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_620
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_621
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_628
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_631
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_633
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_634
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_635
Now lowering submodule _run_on_acc_0
split_name=_run_on_acc_0, input_specs=[InputTensorSpec(shape=torch.Size([1, 3, 352, 640]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
TRT INetwork construction elapsed time: 0:00:00.001383
2023-03-26 19:54:58.127
Build TRT engine elapsed time: 0:00:02.936065
Lowering submodule _run_on_acc_0 elapsed time 0:00:04.452679
Now lowering submodule _run_on_acc_2
split_name=_run_on_acc_2, input_specs=[InputTensorSpec(shape=torch.Size([1, 256, 176, 320]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
TRT INetwork construction elapsed time: 0:00:00.002001
2023-03-26 19:55:19.541
Build TRT engine elapsed time: 0:00:21.371700
Lowering submodule _run_on_acc_2 elapsed time 0:00:21.400930
Now lowering submodule _run_on_acc_4
split_name=_run_on_acc_4, input_specs=[InputTensorSpec(shape=torch.Size([1, 256, 176, 320]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 256, 176, 320]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
TRT INetwork construction elapsed time: 0:00:00.002000
2023-03-26 19:55:23.835
Build TRT engine elapsed time: 0:00:04.250712
Lowering submodule _run_on_acc_4 elapsed time 0:00:04.281701
Now lowering submodule _run_on_acc_6
split_name=_run_on_acc_6, input_specs=[InputTensorSpec(shape=torch.Size([1, 256, 176, 320]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
TRT INetwork construction elapsed time: 0:00:00.001000
2023-03-26 19:55:28.104
Build TRT engine elapsed time: 0:00:04.225747
Lowering submodule _run_on_acc_6 elapsed time 0:00:04.254747
Now lowering submodule _run_on_acc_8
split_name=_run_on_acc_8, input_specs=[InputTensorSpec(shape=torch.Size([1, 256, 176, 320]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 256, 176, 320]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
TRT INetwork construction elapsed time: 0:00:00.003000
2023-03-26 19:55:33.197
Build TRT engine elapsed time: 0:00:05.048240
Lowering submodule _run_on_acc_8 elapsed time 0:00:05.080299
Now lowering submodule _run_on_acc_10
split_name=_run_on_acc_10, input_specs=[InputTensorSpec(shape=torch.Size([1, 256, 88, 160]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
TRT INetwork construction elapsed time: 0:00:00.001349
Build TRT engine elapsed time: 0:00:01.816100
Lowering submodule _run_on_acc_10 elapsed time 0:00:01.845110
Now lowering submodule _run_on_acc_12
split_name=_run_on_acc_12, input_specs=[InputTensorSpec(shape=torch.Size([1, 256, 88, 160]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 256, 88, 160]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
TRT INetwork construction elapsed time: 0:00:00.002249
Build TRT engine elapsed time: 0:00:01.737383
Lowering submodule _run_on_acc_12 elapsed time 0:00:01.770741
Now lowering submodule _run_on_acc_14
split_name=_run_on_acc_14, input_specs=[InputTensorSpec(shape=torch.Size([1, 256, 88, 160]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
TRT INetwork construction elapsed time: 0:00:00.001000
Build TRT engine elapsed time: 0:00:01.776626
Lowering submodule _run_on_acc_14 elapsed time 0:00:01.807626
Now lowering submodule _run_on_acc_16
split_name=_run_on_acc_16, input_specs=[InputTensorSpec(shape=torch.Size([1, 256, 88, 160]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 256, 88, 160]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
TRT INetwork construction elapsed time: 0:00:00.001001
Build TRT engine elapsed time: 0:00:01.797982
Lowering submodule _run_on_acc_16 elapsed time 0:00:01.827555
Now lowering submodule _run_on_acc_18
split_name=_run_on_acc_18, input_specs=[InputTensorSpec(shape=torch.Size([1, 256, 14080]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
2023-03-26 19:55:40.535
Unable to find layer norm plugin, fall back to TensorRT implementation.
2023-03-26 19:55:40.542
TRT INetwork construction elapsed time: 0:00:00.007002
2023-03-26 19:56:02.334
Build TRT engine elapsed time: 0:00:21.785003
Lowering submodule _run_on_acc_18 elapsed time 0:00:21.820629
Now lowering submodule _run_on_acc_20
split_name=_run_on_acc_20, input_specs=[InputTensorSpec(shape=torch.Size([1, 88, 160, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 88, 160, 1]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_144 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_145 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_146 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_147 are constant. In this case, please consider constant fold the model first.
warnings.warn(
TRT INetwork construction elapsed time: 0:00:00.032007
Build TRT engine elapsed time: 0:00:00.427231
Lowering submodule _run_on_acc_20 elapsed time 0:00:00.488966
Now lowering submodule _run_on_acc_22
split_name=_run_on_acc_22, input_specs=[InputTensorSpec(shape=torch.Size([220, 64, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([220, 64, 64]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 220, 1, 64, 64]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_148 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_149 are constant. In this case, please consider constant fold the model first.
warnings.warn(
TRT INetwork construction elapsed time: 0:00:00.019005
2023-03-26 19:56:08.464
Failed to evaluate the script:
Python exception:
Traceback (most recent call last):
File "src\cython\vapoursynth.pyx", line 2866, in vapoursynth._vpy_evaluate
File "src\cython\vapoursynth.pyx", line 2867, in vapoursynth._vpy_evaluate
File "J:\tmp\tempPreviewVapoursynthFile19_54_46_507.vpy", line 36, in
File "I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\vsfemasr\__init__.py", line 171, in femasr
module = lowerer(
File "I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\lower.py", line 323, in __call__
return do_lower(module, inputs)
File "I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\passes\pass_utils.py", line 117, in pass_with_validation
processed_module = pass_(module, input, *args, **kwargs)
File "I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\lower.py", line 320, in do_lower
lower_result = pm(module)
File "I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch\fx\passes\pass_manager.py", line 240, in __call__
out = _pass(out)
File "I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch\fx\passes\pass_manager.py", line 240, in __call__
out = _pass(out)
File "I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\passes\lower_pass_manager_builder.py", line 167, in lower_func
lowered_module = self._lower_func(
File "I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\lower.py", line 180, in lower_pass
interp_res: TRTInterpreterResult = interpreter(mod, input, module_name)
File "I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\lower.py", line 132, in __call__
interp_result: TRTInterpreterResult = interpreter.run(
File "I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\fx2trt.py", line 252, in run
assert engine
AssertionError
2023-03-26 19:56:08.622
[VSE Server]: incoming connection
[VSE Server]: ConnectedState
[VSE Server]: socket is ready to be read
[VSE Server]: connection open: true
[VSE Server]: connection readable: true
[VSE Server] - Message received: changeTo ### J:\tmp\tempPreviewVapoursynthFile19_55_25_204.vpy ### off#0#0#0#0
2023-03-26 19:56:09.052
Skip rewriting leaf module
Skip rewriting leaf module
2023-03-26 19:56:14.029
== Log pass before/after graph to C:\Users\Selur\AppData\Local\Temp\tmpreq3dqbl, before/after are the same = False
== Log pass before/after graph to C:\Users\Selur\AppData\Local\Temp\tmpgrxuz6hl, before/after are the same = True
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_319
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_326
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_329
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_331
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_332
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_339
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_342
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_344
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_345
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_352
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_355
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_357
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_358
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_365
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_368
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_370
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_371
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_378
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_381
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_383
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_384
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_391
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_394
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_396
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_397
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_398
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_405
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_408
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_410
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_411
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_418
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_421
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_423
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_424
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_431
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_434
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_436
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_437
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_444
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_447
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_449
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_450
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_457
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_460
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_462
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_463
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_470
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_473
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_475
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_476
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_477
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_484
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_487
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_489
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_490
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_497
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_500
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_502
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_503
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_510
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_513
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_515
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_516
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_523
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_526
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_528
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_529
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_536
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_539
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_541
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_542
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_549
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_552
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_554
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_555
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_556
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_563
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_566
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_568
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_569
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_576
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_579
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_581
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_582
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_589
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_592
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_594
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_595
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_602
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_605
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_607
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_608
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_615
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_618
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_620
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_621
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_628
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_631
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_633
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_634
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_635
Now lowering submodule _run_on_acc_0
split_name=_run_on_acc_0, input_specs=[InputTensorSpec(shape=torch.Size([1, 3, 352, 640]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
TRT INetwork construction elapsed time: 0:00:00.416462
2023-03-26 19:56:20.745
Build TRT engine elapsed time: 0:00:02.908010
Lowering submodule _run_on_acc_0 elapsed time 0:00:03.358515
Now lowering submodule _run_on_acc_2
split_name=_run_on_acc_2, input_specs=[InputTensorSpec(shape=torch.Size([1, 256, 176, 320]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
TRT INetwork construction elapsed time: 0:00:00.001000
2023-03-26 19:56:25.139
Build TRT engine elapsed time: 0:00:04.345430
Lowering submodule _run_on_acc_2 elapsed time 0:00:04.378391
Now lowering submodule _run_on_acc_4
split_name=_run_on_acc_4, input_specs=[InputTensorSpec(shape=torch.Size([1, 256, 176, 320]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 256, 176, 320]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
TRT INetwork construction elapsed time: 0:00:00.001000
2023-03-26 19:56:29.557
Build TRT engine elapsed time: 0:00:04.366875
Lowering submodule _run_on_acc_4 elapsed time 0:00:04.403853
Now lowering submodule _run_on_acc_6
split_name=_run_on_acc_6, input_specs=[InputTensorSpec(shape=torch.Size([1, 256, 176, 320]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
TRT INetwork construction elapsed time: 0:00:00.002001
2023-03-26 19:56:33.951
Build TRT engine elapsed time: 0:00:04.345732
Lowering submodule _run_on_acc_6 elapsed time 0:00:04.378103
Now lowering submodule _run_on_acc_8
split_name=_run_on_acc_8, input_specs=[InputTensorSpec(shape=torch.Size([1, 256, 176, 320]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 256, 176, 320]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
TRT INetwork construction elapsed time: 0:00:00.003001
2023-03-26 19:56:39.176
Build TRT engine elapsed time: 0:00:05.176863
Lowering submodule _run_on_acc_8 elapsed time 0:00:05.211868
Now lowering submodule _run_on_acc_10
split_name=_run_on_acc_10, input_specs=[InputTensorSpec(shape=torch.Size([1, 256, 88, 160]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
TRT INetwork construction elapsed time: 0:00:00.001001
Build TRT engine elapsed time: 0:00:01.780700
Lowering submodule _run_on_acc_10 elapsed time 0:00:01.815286
Now lowering submodule _run_on_acc_12
split_name=_run_on_acc_12, input_specs=[InputTensorSpec(shape=torch.Size([1, 256, 88, 160]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 256, 88, 160]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
TRT INetwork construction elapsed time: 0:00:00.002000
Build TRT engine elapsed time: 0:00:01.784465
Lowering submodule _run_on_acc_12 elapsed time 0:00:01.820483
Now lowering submodule _run_on_acc_14
split_name=_run_on_acc_14, input_specs=[InputTensorSpec(shape=torch.Size([1, 256, 88, 160]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
TRT INetwork construction elapsed time: 0:00:00.002000
Build TRT engine elapsed time: 0:00:01.792631
Lowering submodule _run_on_acc_14 elapsed time 0:00:01.824090
Now lowering submodule _run_on_acc_16
split_name=_run_on_acc_16, input_specs=[InputTensorSpec(shape=torch.Size([1, 256, 88, 160]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 256, 88, 160]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
TRT INetwork construction elapsed time: 0:00:00.001000
Build TRT engine elapsed time: 0:00:01.791811
Lowering submodule _run_on_acc_16 elapsed time 0:00:01.822825
Now lowering submodule _run_on_acc_18
split_name=_run_on_acc_18, input_specs=[InputTensorSpec(shape=torch.Size([1, 256, 14080]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
2023-03-26 19:56:46.553
Unable to find layer norm plugin, fall back to TensorRT implementation.
2023-03-26 19:56:46.561
TRT INetwork construction elapsed time: 0:00:00.008002
2023-03-26 19:56:52.869
Build TRT engine elapsed time: 0:00:06.301220
Lowering submodule _run_on_acc_18 elapsed time 0:00:06.339860
Now lowering submodule _run_on_acc_20
split_name=_run_on_acc_20, input_specs=[InputTensorSpec(shape=torch.Size([1, 88, 160, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 88, 160, 1]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
TRT INetwork construction elapsed time: 0:00:00.004995
Build TRT engine elapsed time: 0:00:00.415419
Lowering submodule _run_on_acc_20 elapsed time 0:00:00.451659
Now lowering submodule _run_on_acc_22
split_name=_run_on_acc_22, input_specs=[InputTensorSpec(shape=torch.Size([220, 64, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([220, 64, 64]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 220, 1, 64, 64]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
split_name=_run_on_acc_22, input_specs=[InputTensorSpec(shape=torch.Size([220, 64, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([220, 64, 64]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 220, 1, 64, 64]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
Timing cache is used!
TRT INetwork construction elapsed time: 0:00:00.005986
TRT INetwork construction elapsed time: 0:00:00.005986
2023-03-26 19:57:08.059
Failed to evaluate the script:
Python exception:
Traceback (most recent call last):
File "src\cython\vapoursynth.pyx", line 2866, in vapoursynth._vpy_evaluate
File "src\cython\vapoursynth.pyx", line 2867, in vapoursynth._vpy_evaluate
File "J:\tmp\tempPreviewVapoursynthFile19_55_25_204.vpy", line 36, in
File "I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\vsfemasr\__init__.py", line 171, in femasr
module = lowerer(
File "I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\lower.py", line 323, in __call__
return do_lower(module, inputs)
File "I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\passes\pass_utils.py", line 117, in pass_with_validation
processed_module = pass_(module, input, *args, **kwargs)
File "I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\lower.py", line 320, in do_lower
lower_result = pm(module)
File "I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch\fx\passes\pass_manager.py", line 240, in __call__
out = _pass(out)
File "I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch\fx\passes\pass_manager.py", line 240, in __call__
out = _pass(out)
File "I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\passes\lower_pass_manager_builder.py", line 167, in lower_func
lowered_module = self._lower_func(
File "I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\lower.py", line 180, in lower_pass
interp_res: TRTInterpreterResult = interpreter(mod, input, module_name)
File "I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\lower.py", line 132, in __call__
interp_result: TRTInterpreterResult = interpreter.run(
File "I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\fx2trt.py", line 252, in run
assert engine
AssertionError
2023-03-26 19:57:08.195
[VSE Server]: socket is ready to be read
[VSE Server]: connection open: true
[VSE Server]: connection readable: true
[VSE Server] - Message received: changeTo ### J:\tmp\tempPreviewVapoursynthFile19_56_10_330.vpy ### off#0#0#0#0
2023-03-26 19:57:08.704
Skip rewriting leaf module
Skip rewriting leaf module
2023-03-26 19:57:13.600
== Log pass before/after graph to C:\Users\Selur\AppData\Local\Temp\tmpa_f7h_70, before/after are the same = False
== Log pass before/after graph to C:\Users\Selur\AppData\Local\Temp\tmp6nkjkumw, before/after are the same = True
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_319
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_326
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_329
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_331
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_332
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_339
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_342
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_344
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_345
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_352
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_355
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_357
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_358
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_365
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_368
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_370
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_371
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_378
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_381
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_383
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_384
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_391
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_394
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_396
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_397
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_398
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_405
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_408
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_410
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_411
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_418
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_421
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_423
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_424
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_431
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_434
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_436
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_437
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_444
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_447
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_449
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_450
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_457
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_460
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_462
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_463
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_470
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_473
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_475
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_476
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_477
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_484
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_487
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_489
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_490
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_497
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_500
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_502
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_503
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_510
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_513
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_515
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_516
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_523
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_526
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_528
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_529
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_536
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_539
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_541
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_542
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_549
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_552
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_554
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_555
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_556
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_563
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_566
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_568
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_569
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_576
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_579
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_581
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_582
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_589
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_592
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_594
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_595
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_602
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_605
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_607
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_608
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_615
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_618
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_620
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_621
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_628
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_631
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_633
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_634
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_635
Now lowering submodule _run_on_acc_0
split_name=_run_on_acc_0, input_specs=[InputTensorSpec(shape=torch.Size([1, 3, 352, 640]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
TRT INetwork construction elapsed time: 0:00:00.283551
2023-03-26 19:57:25.432
Build TRT engine elapsed time: 0:00:02.833113
Lowering submodule _run_on_acc_0 elapsed time 0:00:03.308137
Now lowering submodule _run_on_acc_2
split_name=_run_on_acc_2, input_specs=[InputTensorSpec(shape=torch.Size([1, 256, 176, 320]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
TRT INetwork construction elapsed time: 0:00:00.002000
2023-03-26 19:57:29.901
Build TRT engine elapsed time: 0:00:04.184779
Lowering submodule _run_on_acc_2 elapsed time 0:00:04.375815
Now lowering submodule _run_on_acc_4
split_name=_run_on_acc_4, input_specs=[InputTensorSpec(shape=torch.Size([1, 256, 176, 320]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 256, 176, 320]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
TRT INetwork construction elapsed time: 0:00:00.001000
2023-03-26 19:57:34.431
Build TRT engine elapsed time: 0:00:04.242936
Lowering submodule _run_on_acc_4 elapsed time 0:00:04.439479
Now lowering submodule _run_on_acc_6
split_name=_run_on_acc_6, input_specs=[InputTensorSpec(shape=torch.Size([1, 256, 176, 320]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
TRT INetwork construction elapsed time: 0:00:00.001000
2023-03-26 19:57:38.908
Build TRT engine elapsed time: 0:00:04.185875
Lowering submodule _run_on_acc_6 elapsed time 0:00:04.380859
Now lowering submodule _run_on_acc_8
split_name=_run_on_acc_8, input_specs=[InputTensorSpec(shape=torch.Size([1, 256, 176, 320]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 256, 176, 320]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
TRT INetwork construction elapsed time: 0:00:00.003000
2023-03-26 19:57:44.242
Build TRT engine elapsed time: 0:00:05.038204
Lowering submodule _run_on_acc_8 elapsed time 0:00:05.239721
Now lowering submodule _run_on_acc_10
split_name=_run_on_acc_10, input_specs=[InputTensorSpec(shape=torch.Size([1, 256, 88, 160]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
TRT INetwork construction elapsed time: 0:00:00.001001
Build TRT engine elapsed time: 0:00:01.729166
Lowering submodule _run_on_acc_10 elapsed time 0:00:01.926857
Now lowering submodule _run_on_acc_12
split_name=_run_on_acc_12, input_specs=[InputTensorSpec(shape=torch.Size([1, 256, 88, 160]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 256, 88, 160]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
TRT INetwork construction elapsed time: 0:00:00.002000
Build TRT engine elapsed time: 0:00:01.723373
Lowering submodule _run_on_acc_12 elapsed time 0:00:01.923711
Now lowering submodule _run_on_acc_14
split_name=_run_on_acc_14, input_specs=[InputTensorSpec(shape=torch.Size([1, 256, 88, 160]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
TRT INetwork construction elapsed time: 0:00:00.002000
Build TRT engine elapsed time: 0:00:01.731051
Lowering submodule _run_on_acc_14 elapsed time 0:00:01.931001
Now lowering submodule _run_on_acc_16
split_name=_run_on_acc_16, input_specs=[InputTensorSpec(shape=torch.Size([1, 256, 88, 160]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 256, 88, 160]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
TRT INetwork construction elapsed time: 0:00:00.000999
Build TRT engine elapsed time: 0:00:01.724712
Lowering submodule _run_on_acc_16 elapsed time 0:00:01.928761
Now lowering submodule _run_on_acc_18
split_name=_run_on_acc_18, input_specs=[InputTensorSpec(shape=torch.Size([1, 256, 14080]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
2023-03-26 19:57:52.589
Unable to find layer norm plugin, fall back to TensorRT implementation.
2023-03-26 19:57:52.640
TRT INetwork construction elapsed time: 0:00:00.052510
2023-03-26 19:57:58.768
Build TRT engine elapsed time: 0:00:06.077423
Lowering submodule _run_on_acc_18 elapsed time 0:00:06.330382
Now lowering submodule _run_on_acc_20
split_name=_run_on_acc_20, input_specs=[InputTensorSpec(shape=torch.Size([1, 88, 160, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 88, 160, 1]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
TRT INetwork construction elapsed time: 0:00:00.006000
Build TRT engine elapsed time: 0:00:00.414580
Lowering submodule _run_on_acc_20 elapsed time 0:00:00.638044
Now lowering submodule _run_on_acc_22
split_name=_run_on_acc_22, input_specs=[InputTensorSpec(shape=torch.Size([220, 64, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([220, 64, 64]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 220, 1, 64, 64]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
TRT INetwork construction elapsed time: 0:00:00.007001
2023-03-26 19:58:04.459
Failed to evaluate the script:
Python exception:
Traceback (most recent call last):
File "src\cython\vapoursynth.pyx", line 2866, in vapoursynth._vpy_evaluate
File "src\cython\vapoursynth.pyx", line 2867, in vapoursynth._vpy_evaluate
File "J:\tmp\tempPreviewVapoursynthFile19_56_10_330.vpy", line 36, in
clip = FeMaSR(clip=clip, device_index=0, trt=True, trt_cache_path=r"J:\tmp") # 1280x704
File "I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\vsfemasr\__init__.py", line 171, in femasr
module = lowerer(
File "I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\lower.py", line 323, in __call__
return do_lower(module, inputs)
File "I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\passes\pass_utils.py", line 117, in pass_with_validation
processed_module = pass_(module, input, *args, **kwargs)
File "I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\lower.py", line 320, in do_lower
lower_result = pm(module)
File "I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch\fx\passes\pass_manager.py", line 240, in __call__
out = _pass(out)
File "I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch\fx\passes\pass_manager.py", line 240, in __call__
out = _pass(out)
File "I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\passes\lower_pass_manager_builder.py", line 167, in lower_func
lowered_module = self._lower_func(
File "I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\lower.py", line 180, in lower_pass
interp_res: TRTInterpreterResult = interpreter(mod, input, module_name)
File "I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\lower.py", line 132, in __call__
interp_result: TRTInterpreterResult = interpreter.run(
File "I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\fx2trt.py", line 252, in run
assert engine
AssertionError
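As far as I can tell, the AssertionError at the end only means that TensorRT returned no engine for that submodule: per the traceback, fx2trt.py just does "assert engine" on the builder result, and TensorRT's builder returns None whenever a build fails, so the traceback itself never says why. A minimal sketch of that pattern (assuming TensorRT 8.x's Python API, not the actual torch_tensorrt code):
import tensorrt as trt

# Minimal sketch (assumption: TensorRT 8.x Python API) of the check that fails
# inside torch_tensorrt's fx2trt interpreter: build_engine() returns None on
# failure, and the caller only asserts on the result.
logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
)

# Stand-in for the traced FX submodule: a single identity layer.
inp = network.add_input("input", trt.float16, (1, 256, 88, 160))
identity = network.add_identity(inp)
network.mark_output(identity.get_output(0))

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.FP16)
config.set_memory_pool_limit(trt.MemoryPoolType.WORKSPACE, 1 << 30)  # 1 GiB

engine = builder.build_engine(network, config)  # None if the build fails
if engine is None:
    # This is the condition that surfaces as "assert engine" -> AssertionError
    print("TensorRT could not build the engine; see the TRT log above")
So the interesting part would be whatever TensorRT itself logs right before the assert fires, not the assert itself.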
Using a lower resolution:
# Imports
import vapoursynth as vs
# getting Vapoursynth core
core = vs.core
import site
import os
import ctypes
# Adding torch dependencies to PATH
path = site.getsitepackages()[0]+'/torch_dependencies/bin/'
ctypes.windll.kernel32.SetDllDirectoryW(path)
path = path.replace('\\', '/')
os.environ["PATH"] = path + os.pathsep + os.environ["PATH"]
os.environ["CUDA_MODULE_LOADING"] = "LAZY"
# Loading Plugins
core.std.LoadPlugin(path="i:/Hybrid/64bit/vsfilters/Support/fmtconv.dll")
core.std.LoadPlugin(path="i:/Hybrid/64bit/vsfilters/SourceFilter/LSmashSource/vslsmashsource.dll")
# source: 'G:\TestClips&Co\test.avi'
# current color space: YUV420P8, bit depth: 8, resolution: 640x352, fps: 25, color matrix: 470bg, yuv luminance scale: limited, scanorder: progressive
# Loading G:\TestClips&Co\test.avi using LWLibavSource
clip = core.lsmas.LWLibavSource(source="G:/TestClips&Co/test.avi", format="YUV420P8", stream_index=0, cache=0, prefer_hw=0)
# Setting detected color matrix (470bg).
clip = core.std.SetFrameProps(clip, _Matrix=5)
# Setting color transfer info (470bg), when it is not set
clip = clip if not core.text.FrameProps(clip,'_Transfer') else core.std.SetFrameProps(clip, _Transfer=5)
# Setting color primaries info (), when it is not set
clip = clip if not core.text.FrameProps(clip,'_Primaries') else core.std.SetFrameProps(clip, _Primaries=5)
# Setting color range to TV (limited) range.
clip = core.std.SetFrameProp(clip=clip, prop="_ColorRange", intval=1)
# making sure frame rate is set to 25
clip = core.std.AssumeFPS(clip=clip, fpsnum=25, fpsden=1)
clip = core.std.SetFrameProp(clip=clip, prop="_FieldBased", intval=0) # progressive
# adjusting resolution before resizing
clip = core.fmtc.resample(clip=clip, w=320, h=176, kernel="lanczos", interlaced=False, interlacedd=False)# before YUV420P8 after YUV420P16
from vsfemasr import femasr as FeMaSR
# adjusting color space from YUV420P16 to RGBH for VsFeMaSR
clip = core.resize.Bicubic(clip=clip, format=vs.RGBH, matrix_in_s="470bg", range_s="limited")
# resizing using FeMaSR
clip = FeMaSR(clip=clip, device_index=0, trt=True, trt_cache_path=r"J:\tmp") # 640x352
# adjusting output color from: RGBH to YUV420P10 for x265Model
clip = core.resize.Bicubic(clip=clip, format=vs.YUV420P10, matrix_s="470bg", range_s="limited", dither_type="error_diffusion")
# set output frame rate to 25fps (progressive)
clip = core.std.AssumeFPS(clip=clip, fpsnum=25, fpsden=1)
# Output
clip.set_output()
doesn't help either:
2023-03-27 05:19:43.377
Skip rewriting leaf module
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\tracer\acc_tracer\acc_tracer.py:584: UserWarning: acc_tracer does not support currently support models for training. Calling eval on model before tracing.
warnings.warn(
Skip rewriting leaf module
== Log pass before/after graph to C:\Users\Selur\AppData\Local\Temp\tmpjmqoifyb, before/after are the same = False
== Log pass before/after graph to C:\Users\Selur\AppData\Local\Temp\tmpu2_5tp_s, before/after are the same = True
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_319
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_326
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_329
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_331
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_332
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_339
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_342
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_344
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_345
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_352
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_355
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_357
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_358
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_365
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_368
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_370
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_371
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_378
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_381
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_383
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_384
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_391
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_394
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_396
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_397
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_398
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_405
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_408
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_410
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_411
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_418
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_421
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_423
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_424
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_431
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_434
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_436
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_437
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_444
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_447
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_449
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_450
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_457
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_460
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_462
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_463
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_470
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_473
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_475
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_476
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_477
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_484
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_487
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_489
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_490
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_497
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_500
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_502
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_503
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_510
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_513
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_515
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_516
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_523
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_526
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_528
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_529
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_536
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_539
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_541
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_542
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_549
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_552
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_554
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_555
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_556
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_563
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_566
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_568
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_569
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_576
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_579
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_581
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_582
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_589
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_592
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_594
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_595
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_602
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_605
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_607
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_608
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_615
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_618
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_620
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_621
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_628
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_631
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_633
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_634
: Found bad pattern: y.reshape((x, ...)). Reshape node: reshape_635
Now lowering submodule _run_on_acc_0
Now lowering submodule _run_on_acc_0
split_name=_run_on_acc_0, input_specs=[InputTensorSpec(shape=torch.Size([1, 3, 192, 320]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
split_name=_run_on_acc_0, input_specs=[InputTensorSpec(shape=torch.Size([1, 3, 192, 320]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
Timing cache is used!
2023-03-27 05:19:52.711
TRT INetwork construction elapsed time: 0:00:00.007916
TRT INetwork construction elapsed time: 0:00:00.007916
Build TRT engine elapsed time: 0:00:01.467011
Build TRT engine elapsed time: 0:00:01.467011
Lowering submodule _run_on_acc_0 elapsed time 0:00:06.490400
Lowering submodule _run_on_acc_0 elapsed time 0:00:06.490400
Now lowering submodule _run_on_acc_2
Now lowering submodule _run_on_acc_2
split_name=_run_on_acc_2, input_specs=[InputTensorSpec(shape=torch.Size([1, 256, 96, 160]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
split_name=_run_on_acc_2, input_specs=[InputTensorSpec(shape=torch.Size([1, 256, 96, 160]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
Timing cache is used!
TRT INetwork construction elapsed time: 0:00:00.002721
TRT INetwork construction elapsed time: 0:00:00.002721
2023-03-27 05:20:13.327
Build TRT engine elapsed time: 0:00:19.097724
Build TRT engine elapsed time: 0:00:19.097724
Lowering submodule _run_on_acc_2 elapsed time 0:00:19.127441
Lowering submodule _run_on_acc_2 elapsed time 0:00:19.127441
Now lowering submodule _run_on_acc_4
Now lowering submodule _run_on_acc_4
split_name=_run_on_acc_4, input_specs=[InputTensorSpec(shape=torch.Size([1, 256, 96, 160]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 256, 96, 160]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
split_name=_run_on_acc_4, input_specs=[InputTensorSpec(shape=torch.Size([1, 256, 96, 160]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 256, 96, 160]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
Timing cache is used!
TRT INetwork construction elapsed time: 0:00:00.000997
TRT INetwork construction elapsed time: 0:00:00.000997
Build TRT engine elapsed time: 0:00:01.871680
Build TRT engine elapsed time: 0:00:01.871680
Lowering submodule _run_on_acc_4 elapsed time 0:00:01.900656
Lowering submodule _run_on_acc_4 elapsed time 0:00:01.900656
Now lowering submodule _run_on_acc_6
Now lowering submodule _run_on_acc_6
split_name=_run_on_acc_6, input_specs=[InputTensorSpec(shape=torch.Size([1, 256, 96, 160]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
split_name=_run_on_acc_6, input_specs=[InputTensorSpec(shape=torch.Size([1, 256, 96, 160]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
Timing cache is used!
TRT INetwork construction elapsed time: 0:00:00.000997
TRT INetwork construction elapsed time: 0:00:00.000997
Build TRT engine elapsed time: 0:00:01.830022
Build TRT engine elapsed time: 0:00:01.830022
Lowering submodule _run_on_acc_6 elapsed time 0:00:01.859147
Lowering submodule _run_on_acc_6 elapsed time 0:00:01.859147
Now lowering submodule _run_on_acc_8
split_name=_run_on_acc_8, input_specs=[InputTensorSpec(shape=torch.Size([1, 256, 96, 160]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 256, 96, 160]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
TRT INetwork construction elapsed time: 0:00:00.003000
2023-03-27 05:20:19.471
Build TRT engine elapsed time: 0:00:02.317220
Lowering submodule _run_on_acc_8 elapsed time 0:00:02.347223
Now lowering submodule _run_on_acc_10
split_name=_run_on_acc_10, input_specs=[InputTensorSpec(shape=torch.Size([1, 256, 48, 80]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
TRT INetwork construction elapsed time: 0:00:00.000999
Build TRT engine elapsed time: 0:00:01.354626
Lowering submodule _run_on_acc_10 elapsed time 0:00:01.382135
Now lowering submodule _run_on_acc_12
split_name=_run_on_acc_12, input_specs=[InputTensorSpec(shape=torch.Size([1, 256, 48, 80]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 256, 48, 80]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
TRT INetwork construction elapsed time: 0:00:00.001000
Build TRT engine elapsed time: 0:00:01.378106
Lowering submodule _run_on_acc_12 elapsed time 0:00:01.407108
Now lowering submodule _run_on_acc_14
split_name=_run_on_acc_14, input_specs=[InputTensorSpec(shape=torch.Size([1, 256, 48, 80]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
TRT INetwork construction elapsed time: 0:00:00.000999
Build TRT engine elapsed time: 0:00:01.380241
Lowering submodule _run_on_acc_14 elapsed time 0:00:01.410777
Now lowering submodule _run_on_acc_16
split_name=_run_on_acc_16, input_specs=[InputTensorSpec(shape=torch.Size([1, 256, 48, 80]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 256, 48, 80]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
TRT INetwork construction elapsed time: 0:00:00.003001
Build TRT engine elapsed time: 0:00:01.396178
Lowering submodule _run_on_acc_16 elapsed time 0:00:01.426690
Now lowering submodule _run_on_acc_18
split_name=_run_on_acc_18, input_specs=[InputTensorSpec(shape=torch.Size([1, 256, 3840]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
2023-03-27 05:20:25.185
Unable to find layer norm plugin, fall back to TensorRT implementation.
2023-03-27 05:20:25.191
TRT INetwork construction elapsed time: 0:00:00.010184
2023-03-27 05:20:43.247
Build TRT engine elapsed time: 0:00:18.049577
Lowering submodule _run_on_acc_18 elapsed time 0:00:18.085758
Now lowering submodule _run_on_acc_20
split_name=_run_on_acc_20, input_specs=[InputTensorSpec(shape=torch.Size([1, 48, 80, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 48, 80, 1]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_144 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_145 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_146 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_147 are constant. In this case, please consider constant fold the model first.
warnings.warn(
TRT INetwork construction elapsed time: 0:00:00.033131
Build TRT engine elapsed time: 0:00:00.472221
Lowering submodule _run_on_acc_20 elapsed time 0:00:00.534362
Now lowering submodule _run_on_acc_22
split_name=_run_on_acc_22, input_specs=[InputTensorSpec(shape=torch.Size([60, 64, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([60, 64, 64]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 60, 1, 64, 64]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_148 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_149 are constant. In this case, please consider constant fold the model first.
warnings.warn(
TRT INetwork construction elapsed time: 0:00:00.021358
2023-03-27 05:20:48.278
Build TRT engine elapsed time: 0:00:04.422826
Lowering submodule _run_on_acc_22 elapsed time 0:00:04.474202
Now lowering submodule _run_on_acc_24
split_name=_run_on_acc_24, input_specs=[InputTensorSpec(shape=torch.Size([1, 3840, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 3840, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
2023-03-27 05:20:48.317
Unable to find layer norm plugin, fall back to TensorRT implementation.
Unable to find layer norm plugin, fall back to TensorRT implementation.
2023-03-27 05:20:48.335
TRT INetwork construction elapsed time: 0:00:00.019704
2023-03-27 05:20:58.364
Build TRT engine elapsed time: 0:00:10.021728
Lowering submodule _run_on_acc_24 elapsed time 0:00:10.068910
Now lowering submodule _run_on_acc_26
split_name=_run_on_acc_26, input_specs=[InputTensorSpec(shape=torch.Size([1, 48, 80, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 48, 80, 1]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_150 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_151 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_152 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_153 are constant. In this case, please consider constant fold the model first.
warnings.warn(
TRT INetwork construction elapsed time: 0:00:00.031840
Build TRT engine elapsed time: 0:00:00.403447
Lowering submodule _run_on_acc_26 elapsed time 0:00:00.465066
Now lowering submodule _run_on_acc_28
split_name=_run_on_acc_28, input_specs=[InputTensorSpec(shape=torch.Size([60, 64, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([60, 64, 64]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 60, 1, 64, 64]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_154 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_155 are constant. In this case, please consider constant fold the model first.
warnings.warn(
TRT INetwork construction elapsed time: 0:00:00.020004
2023-03-27 05:21:03.309
Build TRT engine elapsed time: 0:00:04.404170
Lowering submodule _run_on_acc_28 elapsed time 0:00:04.457413
Now lowering submodule _run_on_acc_30
split_name=_run_on_acc_30, input_specs=[InputTensorSpec(shape=torch.Size([1, 3840, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 3840, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
2023-03-27 05:21:03.352
Unable to find layer norm plugin, fall back to TensorRT implementation.
Unable to find layer norm plugin, fall back to TensorRT implementation.
2023-03-27 05:21:03.370
TRT INetwork construction elapsed time: 0:00:00.019124
2023-03-27 05:21:13.399
Build TRT engine elapsed time: 0:00:10.022734
Lowering submodule _run_on_acc_30 elapsed time 0:00:10.072859
Now lowering submodule _run_on_acc_32
split_name=_run_on_acc_32, input_specs=[InputTensorSpec(shape=torch.Size([1, 48, 80, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 48, 80, 1]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_156 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_157 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_158 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_159 are constant. In this case, please consider constant fold the model first.
warnings.warn(
TRT INetwork construction elapsed time: 0:00:00.033510
Build TRT engine elapsed time: 0:00:00.410851
Lowering submodule _run_on_acc_32 elapsed time 0:00:00.474359
Now lowering submodule _run_on_acc_34
split_name=_run_on_acc_34, input_specs=[InputTensorSpec(shape=torch.Size([60, 64, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([60, 64, 64]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 60, 1, 64, 64]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_160 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_161 are constant. In this case, please consider constant fold the model first.
warnings.warn(
TRT INetwork construction elapsed time: 0:00:00.021186
2023-03-27 05:21:18.137
Build TRT engine elapsed time: 0:00:04.182155
Lowering submodule _run_on_acc_34 elapsed time 0:00:04.237904
Now lowering submodule _run_on_acc_36
split_name=_run_on_acc_36, input_specs=[InputTensorSpec(shape=torch.Size([1, 3840, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 3840, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
2023-03-27 05:21:18.181
Unable to find layer norm plugin, fall back to TensorRT implementation.
Unable to find layer norm plugin, fall back to TensorRT implementation.
2023-03-27 05:21:18.199
TRT INetwork construction elapsed time: 0:00:00.018492
2023-03-27 05:21:28.173
Build TRT engine elapsed time: 0:00:09.965197
Lowering submodule _run_on_acc_36 elapsed time 0:00:10.016874
Now lowering submodule _run_on_acc_38
split_name=_run_on_acc_38, input_specs=[InputTensorSpec(shape=torch.Size([1, 48, 80, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 48, 80, 1]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_162 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_163 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_164 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_165 are constant. In this case, please consider constant fold the model first.
warnings.warn(
TRT INetwork construction elapsed time: 0:00:00.035506
Build TRT engine elapsed time: 0:00:00.409342
Lowering submodule _run_on_acc_38 elapsed time 0:00:00.477063
Now lowering submodule _run_on_acc_40
split_name=_run_on_acc_40, input_specs=[InputTensorSpec(shape=torch.Size([60, 64, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([60, 64, 64]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 60, 1, 64, 64]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_166 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_167 are constant. In this case, please consider constant fold the model first.
warnings.warn(
TRT INetwork construction elapsed time: 0:00:00.021001
2023-03-27 05:21:32.882
Build TRT engine elapsed time: 0:00:04.148708
Lowering submodule _run_on_acc_40 elapsed time 0:00:04.206795
Now lowering submodule _run_on_acc_42
split_name=_run_on_acc_42, input_specs=[InputTensorSpec(shape=torch.Size([1, 3840, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 3840, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
2023-03-27 05:21:32.927
Unable to find layer norm plugin, fall back to TensorRT implementation.
Unable to find layer norm plugin, fall back to TensorRT implementation.
2023-03-27 05:21:32.946
TRT INetwork construction elapsed time: 0:00:00.019506
2023-03-27 05:21:42.970
Build TRT engine elapsed time: 0:00:10.017006
Lowering submodule _run_on_acc_42 elapsed time 0:00:10.069570
Now lowering submodule _run_on_acc_44
split_name=_run_on_acc_44, input_specs=[InputTensorSpec(shape=torch.Size([1, 48, 80, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 48, 80, 1]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_168 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_169 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_170 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_171 are constant. In this case, please consider constant fold the model first.
warnings.warn(
TRT INetwork construction elapsed time: 0:00:00.034506
Build TRT engine elapsed time: 0:00:00.427618
Lowering submodule _run_on_acc_44 elapsed time 0:00:00.496209
Now lowering submodule _run_on_acc_46
split_name=_run_on_acc_46, input_specs=[InputTensorSpec(shape=torch.Size([60, 64, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([60, 64, 64]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 60, 1, 64, 64]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_172 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_173 are constant. In this case, please consider constant fold the model first.
warnings.warn(
TRT INetwork construction elapsed time: 0:00:00.021894
2023-03-27 05:21:47.799
Build TRT engine elapsed time: 0:00:04.246928
Lowering submodule _run_on_acc_46 elapsed time 0:00:04.305896
Now lowering submodule _run_on_acc_48
split_name=_run_on_acc_48, input_specs=[InputTensorSpec(shape=torch.Size([1, 3840, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 3840, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
2023-03-27 05:21:47.845
Unable to find layer norm plugin, fall back to TensorRT implementation.
Unable to find layer norm plugin, fall back to TensorRT implementation.
2023-03-27 05:21:47.865
TRT INetwork construction elapsed time: 0:00:00.020405
2023-03-27 05:21:58.114
Build TRT engine elapsed time: 0:00:10.241350
Lowering submodule _run_on_acc_48 elapsed time 0:00:10.294995
Now lowering submodule _run_on_acc_50
split_name=_run_on_acc_50, input_specs=[InputTensorSpec(shape=torch.Size([1, 48, 80, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 48, 80, 1]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_174 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_175 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_176 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_177 are constant. In this case, please consider constant fold the model first.
warnings.warn(
TRT INetwork construction elapsed time: 0:00:00.035008
Build TRT engine elapsed time: 0:00:00.426671
Lowering submodule _run_on_acc_50 elapsed time 0:00:00.496741
Now lowering submodule _run_on_acc_52
split_name=_run_on_acc_52, input_specs=[InputTensorSpec(shape=torch.Size([60, 64, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([60, 64, 64]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 60, 1, 64, 64]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_178 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_179 are constant. In this case, please consider constant fold the model first.
warnings.warn(
TRT INetwork construction elapsed time: 0:00:00.022005
2023-03-27 05:22:02.946
Build TRT engine elapsed time: 0:00:04.249548
Lowering submodule _run_on_acc_52 elapsed time 0:00:04.307900
Now lowering submodule _run_on_acc_54
split_name=_run_on_acc_54, input_specs=[InputTensorSpec(shape=torch.Size([1, 3840, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 3840, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
2023-03-27 05:22:02.991
Unable to find layer norm plugin, fall back to TensorRT implementation.
2023-03-27 05:22:03.002
TRT INetwork construction elapsed time: 0:00:00.011003
2023-03-27 05:22:13.196
Build TRT engine elapsed time: 0:00:10.185831
Lowering submodule _run_on_acc_54 elapsed time 0:00:10.230870
Now lowering submodule _run_on_acc_56
split_name=_run_on_acc_56, input_specs=[InputTensorSpec(shape=torch.Size([1, 256, 48, 80]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 3840, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
2023-03-27 05:22:13.238
Unable to find layer norm plugin, fall back to TensorRT implementation.
2023-03-27 05:22:13.247
TRT INetwork construction elapsed time: 0:00:00.010002
2023-03-27 05:22:16.968
Build TRT engine elapsed time: 0:00:03.713161
Lowering submodule _run_on_acc_56 elapsed time 0:00:03.757133
Now lowering submodule _run_on_acc_58
split_name=_run_on_acc_58, input_specs=[InputTensorSpec(shape=torch.Size([1, 48, 80, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 48, 80, 1]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_180 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_181 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_182 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_183 are constant. In this case, please consider constant fold the model first.
warnings.warn(
TRT INetwork construction elapsed time: 0:00:00.036506
Build TRT engine elapsed time: 0:00:00.424042
Lowering submodule _run_on_acc_58 elapsed time 0:00:00.494137
Now lowering submodule _run_on_acc_60
split_name=_run_on_acc_60, input_specs=[InputTensorSpec(shape=torch.Size([60, 64, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([60, 64, 64]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 60, 1, 64, 64]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_184 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_185 are constant. In this case, please consider constant fold the model first.
warnings.warn(
TRT INetwork construction elapsed time: 0:00:00.021507
2023-03-27 05:22:21.761
Build TRT engine elapsed time: 0:00:04.211298
Lowering submodule _run_on_acc_60 elapsed time 0:00:04.271451
Now lowering submodule _run_on_acc_62
split_name=_run_on_acc_62, input_specs=[InputTensorSpec(shape=torch.Size([1, 3840, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 3840, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
2023-03-27 05:22:21.808
Unable to find layer norm plugin, fall back to TensorRT implementation.
Unable to find layer norm plugin, fall back to TensorRT implementation.
2023-03-27 05:22:21.828
TRT INetwork construction elapsed time: 0:00:00.020001
2023-03-27 05:22:31.858
Build TRT engine elapsed time: 0:00:10.021450
Lowering submodule _run_on_acc_62 elapsed time 0:00:10.077401
Now lowering submodule _run_on_acc_64
Now lowering submodule _run_on_acc_64
split_name=_run_on_acc_64, input_specs=[InputTensorSpec(shape=torch.Size([1, 48, 80, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 48, 80, 1]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
split_name=_run_on_acc_64, input_specs=[InputTensorSpec(shape=torch.Size([1, 48, 80, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 48, 80, 1]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
Timing cache is used!
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_186 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_187 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_188 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_189 are constant. In this case, please consider constant fold the model first.
warnings.warn(
TRT INetwork construction elapsed time: 0:00:00.036505
Build TRT engine elapsed time: 0:00:00.421742
Lowering submodule _run_on_acc_64 elapsed time 0:00:00.493351
Now lowering submodule _run_on_acc_66
split_name=_run_on_acc_66, input_specs=[InputTensorSpec(shape=torch.Size([60, 64, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([60, 64, 64]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 60, 1, 64, 64]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_190 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_191 are constant. In this case, please consider constant fold the model first.
warnings.warn(
TRT INetwork construction elapsed time: 0:00:00.021504
2023-03-27 05:22:36.642
Build TRT engine elapsed time: 0:00:04.201128
Lowering submodule _run_on_acc_66 elapsed time 0:00:04.260797
Now lowering submodule _run_on_acc_68
split_name=_run_on_acc_68, input_specs=[InputTensorSpec(shape=torch.Size([1, 3840, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 3840, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
2023-03-27 05:22:36.689
Unable to find layer norm plugin, fall back to TensorRT implementation.
2023-03-27 05:22:36.708
TRT INetwork construction elapsed time: 0:00:00.020005
2023-03-27 05:22:46.710
Build TRT engine elapsed time: 0:00:09.994147
Lowering submodule _run_on_acc_68 elapsed time 0:00:10.048280
Now lowering submodule _run_on_acc_70
split_name=_run_on_acc_70, input_specs=[InputTensorSpec(shape=torch.Size([1, 48, 80, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 48, 80, 1]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_192 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_193 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_194 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_195 are constant. In this case, please consider constant fold the model first.
warnings.warn(
TRT INetwork construction elapsed time: 0:00:00.037005
Build TRT engine elapsed time: 0:00:00.435881
Lowering submodule _run_on_acc_70 elapsed time 0:00:00.507967
Now lowering submodule _run_on_acc_72
split_name=_run_on_acc_72, input_specs=[InputTensorSpec(shape=torch.Size([60, 64, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([60, 64, 64]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 60, 1, 64, 64]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_196 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_197 are constant. In this case, please consider constant fold the model first.
warnings.warn(
TRT INetwork construction elapsed time: 0:00:00.022005
2023-03-27 05:22:51.571
Build TRT engine elapsed time: 0:00:04.264076
Lowering submodule _run_on_acc_72 elapsed time 0:00:04.324360
Now lowering submodule _run_on_acc_74
split_name=_run_on_acc_74, input_specs=[InputTensorSpec(shape=torch.Size([1, 3840, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 3840, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
2023-03-27 05:22:51.618
Unable to find layer norm plugin, fall back to TensorRT implementation.
2023-03-27 05:22:51.637
TRT INetwork construction elapsed time: 0:00:00.020490
2023-03-27 05:23:01.792
Build TRT engine elapsed time: 0:00:10.146359
Lowering submodule _run_on_acc_74 elapsed time 0:00:10.201391
Now lowering submodule _run_on_acc_76
split_name=_run_on_acc_76, input_specs=[InputTensorSpec(shape=torch.Size([1, 48, 80, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 48, 80, 1]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_198 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_199 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_200 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_201 are constant. In this case, please consider constant fold the model first.
warnings.warn(
TRT INetwork construction elapsed time: 0:00:00.037009
Build TRT engine elapsed time: 0:00:00.419433
Lowering submodule _run_on_acc_76 elapsed time 0:00:00.491929
Now lowering submodule _run_on_acc_78
split_name=_run_on_acc_78, input_specs=[InputTensorSpec(shape=torch.Size([60, 64, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([60, 64, 64]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 60, 1, 64, 64]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_202 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_203 are constant. In this case, please consider constant fold the model first.
warnings.warn(
TRT INetwork construction elapsed time: 0:00:00.022006
2023-03-27 05:23:06.598
Build TRT engine elapsed time: 0:00:04.224634
Lowering submodule _run_on_acc_78 elapsed time 0:00:04.284197
Now lowering submodule _run_on_acc_80
split_name=_run_on_acc_80, input_specs=[InputTensorSpec(shape=torch.Size([1, 3840, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 3840, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
2023-03-27 05:23:06.644
Unable to find layer norm plugin, fall back to TensorRT implementation.
2023-03-27 05:23:06.664
TRT INetwork construction elapsed time: 0:00:00.019939
2023-03-27 05:23:16.660
Build TRT engine elapsed time: 0:00:09.987888
Lowering submodule _run_on_acc_80 elapsed time 0:00:10.041882
Now lowering submodule _run_on_acc_82
split_name=_run_on_acc_82, input_specs=[InputTensorSpec(shape=torch.Size([1, 48, 80, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 48, 80, 1]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_204 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_205 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_206 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_207 are constant. In this case, please consider constant fold the model first.
warnings.warn(
TRT INetwork construction elapsed time: 0:00:00.036506
Build TRT engine elapsed time: 0:00:00.433769
Lowering submodule _run_on_acc_82 elapsed time 0:00:00.504658
Now lowering submodule _run_on_acc_84
split_name=_run_on_acc_84, input_specs=[InputTensorSpec(shape=torch.Size([60, 64, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([60, 64, 64]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 60, 1, 64, 64]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_208 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_209 are constant. In this case, please consider constant fold the model first.
warnings.warn(
TRT INetwork construction elapsed time: 0:00:00.022474
2023-03-27 05:23:21.530
Build TRT engine elapsed time: 0:00:04.274198
Lowering submodule _run_on_acc_84 elapsed time 0:00:04.336163
Now lowering submodule _run_on_acc_86
split_name=_run_on_acc_86, input_specs=[InputTensorSpec(shape=torch.Size([1, 3840, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 3840, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
2023-03-27 05:23:21.577
Unable to find layer norm plugin, fall back to TensorRT implementation.
2023-03-27 05:23:21.597
TRT INetwork construction elapsed time: 0:00:00.020005
2023-03-27 05:23:31.919
Build TRT engine elapsed time: 0:00:10.314745
Lowering submodule _run_on_acc_86 elapsed time 0:00:10.369284
Now lowering submodule _run_on_acc_88
split_name=_run_on_acc_88, input_specs=[InputTensorSpec(shape=torch.Size([1, 48, 80, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 48, 80, 1]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_210 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_211 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_212 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_213 are constant. In this case, please consider constant fold the model first.
warnings.warn(
TRT INetwork construction elapsed time: 0:00:00.038002
Build TRT engine elapsed time: 0:00:00.421442
Lowering submodule _run_on_acc_88 elapsed time 0:00:00.493725
Now lowering submodule _run_on_acc_90
split_name=_run_on_acc_90, input_specs=[InputTensorSpec(shape=torch.Size([60, 64, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([60, 64, 64]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 60, 1, 64, 64]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_214 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_215 are constant. In this case, please consider constant fold the model first.
warnings.warn(
TRT INetwork construction elapsed time: 0:00:00.022619
2023-03-27 05:23:36.797
Build TRT engine elapsed time: 0:00:04.292359
Lowering submodule _run_on_acc_90 elapsed time 0:00:04.353155
Now lowering submodule _run_on_acc_92
split_name=_run_on_acc_92, input_specs=[InputTensorSpec(shape=torch.Size([1, 3840, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 3840, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
2023-03-27 05:23:36.843
Unable to find layer norm plugin, fall back to TensorRT implementation.
2023-03-27 05:23:36.855
TRT INetwork construction elapsed time: 0:00:00.012449
2023-03-27 05:23:47.226
Build TRT engine elapsed time: 0:00:10.362828
Lowering submodule _run_on_acc_92 elapsed time 0:00:10.409648
Now lowering submodule _run_on_acc_94
split_name=_run_on_acc_94, input_specs=[InputTensorSpec(shape=torch.Size([1, 256, 48, 80]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 3840, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
2023-03-27 05:23:47.270
Unable to find layer norm plugin, fall back to TensorRT implementation.
2023-03-27 05:23:47.280
TRT INetwork construction elapsed time: 0:00:00.012002
2023-03-27 05:23:51.052
Build TRT engine elapsed time: 0:00:03.763718
Lowering submodule _run_on_acc_94 elapsed time 0:00:03.808704
Now lowering submodule _run_on_acc_96
split_name=_run_on_acc_96, input_specs=[InputTensorSpec(shape=torch.Size([1, 48, 80, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 48, 80, 1]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_216 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_217 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_218 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_219 are constant. In this case, please consider constant fold the model first.
warnings.warn(
TRT INetwork construction elapsed time: 0:00:00.037003
Build TRT engine elapsed time: 0:00:00.428686
Lowering submodule _run_on_acc_96 elapsed time 0:00:00.500206
Now lowering submodule _run_on_acc_98
split_name=_run_on_acc_98, input_specs=[InputTensorSpec(shape=torch.Size([60, 64, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([60, 64, 64]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 60, 1, 64, 64]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_220 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_221 are constant. In this case, please consider constant fold the model first.
warnings.warn(
TRT INetwork construction elapsed time: 0:00:00.021504
2023-03-27 05:23:55.944
Build TRT engine elapsed time: 0:00:04.304132
Lowering submodule _run_on_acc_98 elapsed time 0:00:04.363832
Now lowering submodule _run_on_acc_100
split_name=_run_on_acc_100, input_specs=[InputTensorSpec(shape=torch.Size([1, 3840, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 3840, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
2023-03-27 05:23:55.990
Unable to find layer norm plugin, fall back to TensorRT implementation.
2023-03-27 05:23:56.010
TRT INetwork construction elapsed time: 0:00:00.019997
2023-03-27 05:24:06.075
Build TRT engine elapsed time: 0:00:10.056246
Lowering submodule _run_on_acc_100 elapsed time 0:00:10.110658
Now lowering submodule _run_on_acc_102
split_name=_run_on_acc_102, input_specs=[InputTensorSpec(shape=torch.Size([1, 48, 80, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 48, 80, 1]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_222 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_223 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_224 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_225 are constant. In this case, please consider constant fold the model first.
warnings.warn(
TRT INetwork construction elapsed time: 0:00:00.036510
Build TRT engine elapsed time: 0:00:00.433247
Lowering submodule _run_on_acc_102 elapsed time 0:00:00.505180
Now lowering submodule _run_on_acc_104
split_name=_run_on_acc_104, input_specs=[InputTensorSpec(shape=torch.Size([60, 64, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([60, 64, 64]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 60, 1, 64, 64]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_226 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_227 are constant. In this case, please consider constant fold the model first.
warnings.warn(
TRT INetwork construction elapsed time: 0:00:00.022015
2023-03-27 05:24:10.928
Build TRT engine elapsed time: 0:00:04.258981
Lowering submodule _run_on_acc_104 elapsed time 0:00:04.319536
Now lowering submodule _run_on_acc_106
split_name=_run_on_acc_106, input_specs=[InputTensorSpec(shape=torch.Size([1, 3840, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 3840, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
2023-03-27 05:24:10.974
Unable to find layer norm plugin, fall back to TensorRT implementation.
2023-03-27 05:24:10.994
TRT INetwork construction elapsed time: 0:00:00.020005
2023-03-27 05:24:21.030
Build TRT engine elapsed time: 0:00:10.027874
Lowering submodule _run_on_acc_106 elapsed time 0:00:10.082504
Now lowering submodule _run_on_acc_108
split_name=_run_on_acc_108, input_specs=[InputTensorSpec(shape=torch.Size([1, 48, 80, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 48, 80, 1]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_228 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_229 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_230 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_231 are constant. In this case, please consider constant fold the model first.
warnings.warn(
TRT INetwork construction elapsed time: 0:00:00.037008
Build TRT engine elapsed time: 0:00:00.423912
Lowering submodule _run_on_acc_108 elapsed time 0:00:00.495086
Now lowering submodule _run_on_acc_110
split_name=_run_on_acc_110, input_specs=[InputTensorSpec(shape=torch.Size([60, 64, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([60, 64, 64]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 60, 1, 64, 64]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_232 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_233 are constant. In this case, please consider constant fold the model first.
warnings.warn(
TRT INetwork construction elapsed time: 0:00:00.021300
TRT INetwork construction elapsed time: 0:00:00.021300
2023-03-27 05:24:25.911
Build TRT engine elapsed time: 0:00:04.294626
Build TRT engine elapsed time: 0:00:04.294626
Lowering submodule _run_on_acc_110 elapsed time 0:00:04.355888
Lowering submodule _run_on_acc_110 elapsed time 0:00:04.355888
Now lowering submodule _run_on_acc_112
Now lowering submodule _run_on_acc_112
split_name=_run_on_acc_112, input_specs=[InputTensorSpec(shape=torch.Size([1, 3840, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 3840, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
split_name=_run_on_acc_112, input_specs=[InputTensorSpec(shape=torch.Size([1, 3840, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 3840, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
Timing cache is used!
2023-03-27 05:24:25.958
Unable to find layer norm plugin, fall back to TensorRT implementation.
Unable to find layer norm plugin, fall back to TensorRT implementation.
2023-03-27 05:24:25.978
TRT INetwork construction elapsed time: 0:00:00.021005
2023-03-27 05:24:36.012
Build TRT engine elapsed time: 0:00:10.025766
Lowering submodule _run_on_acc_112 elapsed time 0:00:10.080759
Now lowering submodule _run_on_acc_114
split_name=_run_on_acc_114, input_specs=[InputTensorSpec(shape=torch.Size([1, 48, 80, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 48, 80, 1]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_234 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_235 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_236 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_237 are constant. In this case, please consider constant fold the model first.
warnings.warn(
TRT INetwork construction elapsed time: 0:00:00.037007
Build TRT engine elapsed time: 0:00:00.416450
Lowering submodule _run_on_acc_114 elapsed time 0:00:00.487694
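Side note on the "Both operands of the binary elementwise op ... are constant" messages above: going by the log itself they are informational only, since the engine for each submodule still builds right after they are printed. If they make the log hard to read, a standard Python warnings filter near the top of the script should hide them. This is only a minimal sketch using the stock warnings module (nothing FeMaSR- or torch_tensorrt-specific is assumed):
import warnings
# hide the torch_tensorrt FX converter's "consider constant fold" UserWarnings
warnings.filterwarnings(
    "ignore",
    message="Both operands of the binary elementwise op",
    category=UserWarning,
)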
Now lowering submodule _run_on_acc_116
split_name=_run_on_acc_116, input_specs=[InputTensorSpec(shape=torch.Size([60, 64, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([60, 64, 64]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 60, 1, 64, 64]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_238 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_239 are constant. In this case, please consider constant fold the model first.
warnings.warn(
TRT INetwork construction elapsed time: 0:00:00.022006
2023-03-27 05:24:40.836
Build TRT engine elapsed time: 0:00:04.247610
Lowering submodule _run_on_acc_116 elapsed time 0:00:04.308578
Now lowering submodule _run_on_acc_118
split_name=_run_on_acc_118, input_specs=[InputTensorSpec(shape=torch.Size([1, 3840, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 3840, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
2023-03-27 05:24:40.884
Unable to find layer norm plugin, fall back to TensorRT implementation.
Unable to find layer norm plugin, fall back to TensorRT implementation.
2023-03-27 05:24:40.904
TRT INetwork construction elapsed time: 0:00:00.020048
2023-03-27 05:24:50.877
Build TRT engine elapsed time: 0:00:09.964763
Lowering submodule _run_on_acc_118 elapsed time 0:00:10.021839
Now lowering submodule _run_on_acc_120
split_name=_run_on_acc_120, input_specs=[InputTensorSpec(shape=torch.Size([1, 48, 80, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 48, 80, 1]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_240 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_241 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_242 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_243 are constant. In this case, please consider constant fold the model first.
warnings.warn(
TRT INetwork construction elapsed time: 0:00:00.037499
Build TRT engine elapsed time: 0:00:00.431422
Lowering submodule _run_on_acc_120 elapsed time 0:00:00.503167
Now lowering submodule _run_on_acc_122
split_name=_run_on_acc_122, input_specs=[InputTensorSpec(shape=torch.Size([60, 64, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([60, 64, 64]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 60, 1, 64, 64]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_244 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_245 are constant. In this case, please consider constant fold the model first.
warnings.warn(
TRT INetwork construction elapsed time: 0:00:00.022005
2023-03-27 05:24:55.738
Build TRT engine elapsed time: 0:00:04.267363
Lowering submodule _run_on_acc_122 elapsed time 0:00:04.327424
Now lowering submodule _run_on_acc_124
split_name=_run_on_acc_124, input_specs=[InputTensorSpec(shape=torch.Size([1, 3840, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 3840, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
2023-03-27 05:24:55.784
Unable to find layer norm plugin, fall back to TensorRT implementation.
Unable to find layer norm plugin, fall back to TensorRT implementation.
2023-03-27 05:24:55.804
TRT INetwork construction elapsed time: 0:00:00.020000
2023-03-27 05:25:05.842
Build TRT engine elapsed time: 0:00:10.029499
Lowering submodule _run_on_acc_124 elapsed time 0:00:10.084028
Now lowering submodule _run_on_acc_126
split_name=_run_on_acc_126, input_specs=[InputTensorSpec(shape=torch.Size([1, 48, 80, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 48, 80, 1]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_246 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_247 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_248 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_249 are constant. In this case, please consider constant fold the model first.
warnings.warn(
TRT INetwork construction elapsed time: 0:00:00.037984
Build TRT engine elapsed time: 0:00:00.428732
Lowering submodule _run_on_acc_126 elapsed time 0:00:00.502204
Now lowering submodule _run_on_acc_128
split_name=_run_on_acc_128, input_specs=[InputTensorSpec(shape=torch.Size([60, 64, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([60, 64, 64]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 60, 1, 64, 64]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_250 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_251 are constant. In this case, please consider constant fold the model first.
warnings.warn(
TRT INetwork construction elapsed time: 0:00:00.022002
2023-03-27 05:25:10.696
Build TRT engine elapsed time: 0:00:04.261908
Lowering submodule _run_on_acc_128 elapsed time 0:00:04.321579
Now lowering submodule _run_on_acc_130
split_name=_run_on_acc_130, input_specs=[InputTensorSpec(shape=torch.Size([1, 3840, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 3840, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
2023-03-27 05:25:10.742
Unable to find layer norm plugin, fall back to TensorRT implementation.
2023-03-27 05:25:10.753
TRT INetwork construction elapsed time: 0:00:00.012003
2023-03-27 05:25:20.927
Build TRT engine elapsed time: 0:00:10.165734
Lowering submodule _run_on_acc_130 elapsed time 0:00:10.211746
Now lowering submodule _run_on_acc_132
split_name=_run_on_acc_132, input_specs=[InputTensorSpec(shape=torch.Size([1, 256, 48, 80]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 3840, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
2023-03-27 05:25:20.972
Unable to find layer norm plugin, fall back to TensorRT implementation.
2023-03-27 05:25:20.984
TRT INetwork construction elapsed time: 0:00:00.013004
2023-03-27 05:25:24.712
Build TRT engine elapsed time: 0:00:03.719135
Lowering submodule _run_on_acc_132 elapsed time 0:00:03.767137
Now lowering submodule _run_on_acc_134
split_name=_run_on_acc_134, input_specs=[InputTensorSpec(shape=torch.Size([1, 48, 80, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 48, 80, 1]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_252 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_253 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_254 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_255 are constant. In this case, please consider constant fold the model first.
warnings.warn(
TRT INetwork construction elapsed time: 0:00:00.038008
Build TRT engine elapsed time: 0:00:00.443596
Lowering submodule _run_on_acc_134 elapsed time 0:00:00.517405
Now lowering submodule _run_on_acc_136
split_name=_run_on_acc_136, input_specs=[InputTensorSpec(shape=torch.Size([60, 64, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([60, 64, 64]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 60, 1, 64, 64]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_256 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_257 are constant. In this case, please consider constant fold the model first.
warnings.warn(
TRT INetwork construction elapsed time: 0:00:00.022005
2023-03-27 05:25:29.560
Build TRT engine elapsed time: 0:00:04.241779
Lowering submodule _run_on_acc_136 elapsed time 0:00:04.302936
Now lowering submodule _run_on_acc_138
split_name=_run_on_acc_138, input_specs=[InputTensorSpec(shape=torch.Size([1, 3840, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 3840, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
2023-03-27 05:25:29.607
Unable to find layer norm plugin, fall back to TensorRT implementation.
Unable to find layer norm plugin, fall back to TensorRT implementation.
2023-03-27 05:25:29.627
TRT INetwork construction elapsed time: 0:00:00.021001
2023-03-27 05:25:39.760
Build TRT engine elapsed time: 0:00:10.125096
Lowering submodule _run_on_acc_138 elapsed time 0:00:10.180046
Now lowering submodule _run_on_acc_140
split_name=_run_on_acc_140, input_specs=[InputTensorSpec(shape=torch.Size([1, 48, 80, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 48, 80, 1]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_258 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_259 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_260 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_261 are constant. In this case, please consider constant fold the model first.
warnings.warn(
TRT INetwork construction elapsed time: 0:00:00.036506
Build TRT engine elapsed time: 0:00:00.428635
Lowering submodule _run_on_acc_140 elapsed time 0:00:00.501459
Now lowering submodule _run_on_acc_142
split_name=_run_on_acc_142, input_specs=[InputTensorSpec(shape=torch.Size([60, 64, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([60, 64, 64]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 60, 1, 64, 64]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_262 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_263 are constant. In this case, please consider constant fold the model first.
warnings.warn(
TRT INetwork construction elapsed time: 0:00:00.022298
2023-03-27 05:25:44.719
Build TRT engine elapsed time: 0:00:04.366112
Lowering submodule _run_on_acc_142 elapsed time 0:00:04.427833
Now lowering submodule _run_on_acc_144
split_name=_run_on_acc_144, input_specs=[InputTensorSpec(shape=torch.Size([1, 3840, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 3840, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
2023-03-27 05:25:44.767
Unable to find layer norm plugin, fall back to TensorRT implementation.
Unable to find layer norm plugin, fall back to TensorRT implementation.
2023-03-27 05:25:44.787
TRT INetwork construction elapsed time: 0:00:00.020982
2023-03-27 05:25:54.843
Build TRT engine elapsed time: 0:00:10.047955
Lowering submodule _run_on_acc_144 elapsed time 0:00:10.106318
Now lowering submodule _run_on_acc_146
split_name=_run_on_acc_146, input_specs=[InputTensorSpec(shape=torch.Size([1, 48, 80, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 48, 80, 1]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_264 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_265 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_266 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_267 are constant. In this case, please consider constant fold the model first.
warnings.warn(
TRT INetwork construction elapsed time: 0:00:00.037005
Build TRT engine elapsed time: 0:00:00.431744
Lowering submodule _run_on_acc_146 elapsed time 0:00:00.503690
Now lowering submodule _run_on_acc_148
split_name=_run_on_acc_148, input_specs=[InputTensorSpec(shape=torch.Size([60, 64, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([60, 64, 64]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 60, 1, 64, 64]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_268 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_269 are constant. In this case, please consider constant fold the model first.
warnings.warn(
TRT INetwork construction elapsed time: 0:00:00.022503
2023-03-27 05:25:59.737
Build TRT engine elapsed time: 0:00:04.296204
Lowering submodule _run_on_acc_148 elapsed time 0:00:04.356075
Now lowering submodule _run_on_acc_150
split_name=_run_on_acc_150, input_specs=[InputTensorSpec(shape=torch.Size([1, 3840, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 3840, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
2023-03-27 05:25:59.783
Unable to find layer norm plugin, fall back to TensorRT implementation.
Unable to find layer norm plugin, fall back to TensorRT implementation.
2023-03-27 05:25:59.803
TRT INetwork construction elapsed time: 0:00:00.021011
2023-03-27 05:26:09.847
Build TRT engine elapsed time: 0:00:10.035541
Lowering submodule _run_on_acc_150 elapsed time 0:00:10.092231
Now lowering submodule _run_on_acc_152
split_name=_run_on_acc_152, input_specs=[InputTensorSpec(shape=torch.Size([1, 48, 80, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 48, 80, 1]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_270 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_271 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_272 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_273 are constant. In this case, please consider constant fold the model first.
warnings.warn(
TRT INetwork construction elapsed time: 0:00:00.038792
Build TRT engine elapsed time: 0:00:00.436945
Lowering submodule _run_on_acc_152 elapsed time 0:00:00.511085
Now lowering submodule _run_on_acc_154
split_name=_run_on_acc_154, input_specs=[InputTensorSpec(shape=torch.Size([60, 64, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([60, 64, 64]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 60, 1, 64, 64]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_274 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_275 are constant. In this case, please consider constant fold the model first.
warnings.warn(
TRT INetwork construction elapsed time: 0:00:00.023795
2023-03-27 05:26:14.778
Build TRT engine elapsed time: 0:00:04.324144
Lowering submodule _run_on_acc_154 elapsed time 0:00:04.387566
Now lowering submodule _run_on_acc_156
split_name=_run_on_acc_156, input_specs=[InputTensorSpec(shape=torch.Size([1, 3840, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 3840, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
2023-03-27 05:26:14.825
Unable to find layer norm plugin, fall back to TensorRT implementation.
Unable to find layer norm plugin, fall back to TensorRT implementation.
2023-03-27 05:26:14.845
TRT INetwork construction elapsed time: 0:00:00.020507
2023-03-27 05:26:24.844
Build TRT engine elapsed time: 0:00:09.991012
Lowering submodule _run_on_acc_156 elapsed time 0:00:10.049578
Now lowering submodule _run_on_acc_158
split_name=_run_on_acc_158, input_specs=[InputTensorSpec(shape=torch.Size([1, 48, 80, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 48, 80, 1]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_276 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_277 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_278 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_279 are constant. In this case, please consider constant fold the model first.
warnings.warn(
TRT INetwork construction elapsed time: 0:00:00.037005
Build TRT engine elapsed time: 0:00:00.423171
Lowering submodule _run_on_acc_158 elapsed time 0:00:00.495677
Now lowering submodule _run_on_acc_160
split_name=_run_on_acc_160, input_specs=[InputTensorSpec(shape=torch.Size([60, 64, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([60, 64, 64]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 60, 1, 64, 64]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_280 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_281 are constant. In this case, please consider constant fold the model first.
warnings.warn(
TRT INetwork construction elapsed time: 0:00:00.021485
2023-03-27 05:26:29.713
Build TRT engine elapsed time: 0:00:04.278824
Build TRT engine elapsed time: 0:00:04.278824
Lowering submodule _run_on_acc_160 elapsed time 0:00:04.340067
Lowering submodule _run_on_acc_160 elapsed time 0:00:04.340067
Now lowering submodule _run_on_acc_162
Now lowering submodule _run_on_acc_162
split_name=_run_on_acc_162, input_specs=[InputTensorSpec(shape=torch.Size([1, 3840, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 3840, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
split_name=_run_on_acc_162, input_specs=[InputTensorSpec(shape=torch.Size([1, 3840, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 3840, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
Timing cache is used!
2023-03-27 05:26:29.760
Unable to find layer norm plugin, fall back to TensorRT implementation.
Unable to find layer norm plugin, fall back to TensorRT implementation.
Unable to find layer norm plugin, fall back to TensorRT implementation.
Unable to find layer norm plugin, fall back to TensorRT implementation.
2023-03-27 05:26:29.780
TRT INetwork construction elapsed time: 0:00:00.020991
TRT INetwork construction elapsed time: 0:00:00.020991
2023-03-27 05:26:39.806
Build TRT engine elapsed time: 0:00:10.018291
Build TRT engine elapsed time: 0:00:10.018291
Lowering submodule _run_on_acc_162 elapsed time 0:00:10.073605
Lowering submodule _run_on_acc_162 elapsed time 0:00:10.073605
Now lowering submodule _run_on_acc_164
Now lowering submodule _run_on_acc_164
split_name=_run_on_acc_164, input_specs=[InputTensorSpec(shape=torch.Size([1, 48, 80, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 48, 80, 1]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
split_name=_run_on_acc_164, input_specs=[InputTensorSpec(shape=torch.Size([1, 48, 80, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 48, 80, 1]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
Timing cache is used!
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_282 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_282 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_283 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_283 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_284 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_284 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_285 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_285 are constant. In this case, please consider constant fold the model first.
warnings.warn(
TRT INetwork construction elapsed time: 0:00:00.038008
TRT INetwork construction elapsed time: 0:00:00.038008
Build TRT engine elapsed time: 0:00:00.424398
Build TRT engine elapsed time: 0:00:00.424398
Lowering submodule _run_on_acc_164 elapsed time 0:00:00.496390
Lowering submodule _run_on_acc_164 elapsed time 0:00:00.496390
Now lowering submodule _run_on_acc_166
Now lowering submodule _run_on_acc_166
split_name=_run_on_acc_166, input_specs=[InputTensorSpec(shape=torch.Size([60, 64, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([60, 64, 64]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 60, 1, 64, 64]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
split_name=_run_on_acc_166, input_specs=[InputTensorSpec(shape=torch.Size([60, 64, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([60, 64, 64]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 60, 1, 64, 64]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
Timing cache is used!
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_286 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_286 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_287 are constant. In this case, please consider constant fold the model first.
warnings.warn(
I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\converters\converter_utils.py:457: UserWarning: Both operands of the binary elementwise op floordiv_287 are constant. In this case, please consider constant fold the model first.
warnings.warn(
TRT INetwork construction elapsed time: 0:00:00.023005
TRT INetwork construction elapsed time: 0:00:00.023005
2023-03-27 05:26:44.701
Build TRT engine elapsed time: 0:00:04.307120
Build TRT engine elapsed time: 0:00:04.307120
Lowering submodule _run_on_acc_166 elapsed time 0:00:04.368131
Lowering submodule _run_on_acc_166 elapsed time 0:00:04.368131
Now lowering submodule _run_on_acc_168
Now lowering submodule _run_on_acc_168
split_name=_run_on_acc_168, input_specs=[InputTensorSpec(shape=torch.Size([1, 3840, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 3840, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
split_name=_run_on_acc_168, input_specs=[InputTensorSpec(shape=torch.Size([1, 3840, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 3840, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
Timing cache is used!
2023-03-27 05:26:44.749
Unable to find layer norm plugin, fall back to TensorRT implementation.
Unable to find layer norm plugin, fall back to TensorRT implementation.
2023-03-27 05:26:44.760
TRT INetwork construction elapsed time: 0:00:00.012003
TRT INetwork construction elapsed time: 0:00:00.012003
2023-03-27 05:26:54.941
Build TRT engine elapsed time: 0:00:10.173015
Build TRT engine elapsed time: 0:00:10.173015
Lowering submodule _run_on_acc_168 elapsed time 0:00:10.220081
Lowering submodule _run_on_acc_168 elapsed time 0:00:10.220081
Now lowering submodule _run_on_acc_170
Now lowering submodule _run_on_acc_170
split_name=_run_on_acc_170, input_specs=[InputTensorSpec(shape=torch.Size([1, 256, 48, 80]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 3840, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
split_name=_run_on_acc_170, input_specs=[InputTensorSpec(shape=torch.Size([1, 256, 48, 80]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 3840, 256]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
Timing cache is used!
TRT INetwork construction elapsed time: 0:00:00.000999
TRT INetwork construction elapsed time: 0:00:00.000999
2023-03-27 05:26:57.108
Build TRT engine elapsed time: 0:00:02.113329
Build TRT engine elapsed time: 0:00:02.113329
Lowering submodule _run_on_acc_170 elapsed time 0:00:02.149360
Lowering submodule _run_on_acc_170 elapsed time 0:00:02.149360
Now lowering submodule _run_on_acc_172
Now lowering submodule _run_on_acc_172
split_name=_run_on_acc_172, input_specs=[InputTensorSpec(shape=torch.Size([1, 256, 48, 80]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
split_name=_run_on_acc_172, input_specs=[InputTensorSpec(shape=torch.Size([1, 256, 48, 80]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
Timing cache is used!
TRT INetwork construction elapsed time: 0:00:00.002000
TRT INetwork construction elapsed time: 0:00:00.002000
Build TRT engine elapsed time: 0:00:01.918650
Build TRT engine elapsed time: 0:00:01.918650
Lowering submodule _run_on_acc_172 elapsed time 0:00:01.954914
Lowering submodule _run_on_acc_172 elapsed time 0:00:01.954914
Now lowering submodule _run_on_acc_174
Now lowering submodule _run_on_acc_174
split_name=_run_on_acc_174, input_specs=[InputTensorSpec(shape=torch.Size([1, 256, 96, 160]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 512, 48, 80]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
split_name=_run_on_acc_174, input_specs=[InputTensorSpec(shape=torch.Size([1, 256, 96, 160]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 512, 48, 80]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
Timing cache is used!
TRT INetwork construction elapsed time: 0:00:00.003001
TRT INetwork construction elapsed time: 0:00:00.003001
2023-03-27 05:27:01.856
Build TRT engine elapsed time: 0:00:02.722756
Build TRT engine elapsed time: 0:00:02.722756
Lowering submodule _run_on_acc_174 elapsed time 0:00:02.760268
Lowering submodule _run_on_acc_174 elapsed time 0:00:02.760268
Now lowering submodule _run_on_acc_176
Now lowering submodule _run_on_acc_176
split_name=_run_on_acc_176, input_specs=[InputTensorSpec(shape=torch.Size([1, 256, 96, 160]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 256, 96, 160]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 256, 96, 160]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
split_name=_run_on_acc_176, input_specs=[InputTensorSpec(shape=torch.Size([1, 256, 96, 160]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 256, 96, 160]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 256, 96, 160]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
Timing cache is used!
TRT INetwork construction elapsed time: 0:00:00.004177
TRT INetwork construction elapsed time: 0:00:00.004177
2023-03-27 05:27:04.652
Build TRT engine elapsed time: 0:00:02.740045
Build TRT engine elapsed time: 0:00:02.740045
Lowering submodule _run_on_acc_176 elapsed time 0:00:02.779477
Lowering submodule _run_on_acc_176 elapsed time 0:00:02.779477
Now lowering submodule _run_on_acc_178
Now lowering submodule _run_on_acc_178
split_name=_run_on_acc_178, input_specs=[InputTensorSpec(shape=torch.Size([1, 256, 96, 160]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 256, 96, 160]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 256, 96, 160]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
split_name=_run_on_acc_178, input_specs=[InputTensorSpec(shape=torch.Size([1, 256, 96, 160]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 256, 96, 160]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 256, 96, 160]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
Timing cache is used!
TRT INetwork construction elapsed time: 0:00:00.003001
TRT INetwork construction elapsed time: 0:00:00.003001
2023-03-27 05:27:07.478
Build TRT engine elapsed time: 0:00:02.772769
Build TRT engine elapsed time: 0:00:02.772769
Lowering submodule _run_on_acc_178 elapsed time 0:00:02.809798
Lowering submodule _run_on_acc_178 elapsed time 0:00:02.809798
Now lowering submodule _run_on_acc_180
Now lowering submodule _run_on_acc_180
split_name=_run_on_acc_180, input_specs=[InputTensorSpec(shape=torch.Size([1, 256, 96, 160]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 256, 96, 160]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 256, 96, 160]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
split_name=_run_on_acc_180, input_specs=[InputTensorSpec(shape=torch.Size([1, 256, 96, 160]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 256, 96, 160]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 256, 96, 160]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
Timing cache is used!
TRT INetwork construction elapsed time: 0:00:00.003998
TRT INetwork construction elapsed time: 0:00:00.003998
2023-03-27 05:27:12.314
Build TRT engine elapsed time: 0:00:04.781494
Build TRT engine elapsed time: 0:00:04.781494
Lowering submodule _run_on_acc_180 elapsed time 0:00:04.821787
Lowering submodule _run_on_acc_180 elapsed time 0:00:04.821787
Now lowering submodule _run_on_acc_182
Now lowering submodule _run_on_acc_182
split_name=_run_on_acc_182, input_specs=[InputTensorSpec(shape=torch.Size([1, 128, 192, 320]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 256, 96, 160]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 256, 96, 160]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 256, 96, 160]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
split_name=_run_on_acc_182, input_specs=[InputTensorSpec(shape=torch.Size([1, 128, 192, 320]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 256, 96, 160]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 256, 96, 160]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 256, 96, 160]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
Timing cache is used!
TRT INetwork construction elapsed time: 0:00:00.004128
TRT INetwork construction elapsed time: 0:00:00.004128
2023-03-27 05:27:40.716
Build TRT engine elapsed time: 0:00:28.343696
Build TRT engine elapsed time: 0:00:28.343696
Lowering submodule _run_on_acc_182 elapsed time 0:00:28.385848
Lowering submodule _run_on_acc_182 elapsed time 0:00:28.385848
Now lowering submodule _run_on_acc_184
Now lowering submodule _run_on_acc_184
split_name=_run_on_acc_184, input_specs=[InputTensorSpec(shape=torch.Size([1, 128, 192, 320]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 128, 192, 320]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 128, 192, 320]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
split_name=_run_on_acc_184, input_specs=[InputTensorSpec(shape=torch.Size([1, 128, 192, 320]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 128, 192, 320]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 128, 192, 320]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
Timing cache is used!
TRT INetwork construction elapsed time: 0:00:00.004128
TRT INetwork construction elapsed time: 0:00:00.004128
2023-03-27 05:27:43.880
Build TRT engine elapsed time: 0:00:03.104550
Build TRT engine elapsed time: 0:00:03.104550
Lowering submodule _run_on_acc_184 elapsed time 0:00:03.146690
Lowering submodule _run_on_acc_184 elapsed time 0:00:03.146690
Now lowering submodule _run_on_acc_186
Now lowering submodule _run_on_acc_186
split_name=_run_on_acc_186, input_specs=[InputTensorSpec(shape=torch.Size([1, 128, 192, 320]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 128, 192, 320]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 128, 192, 320]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
split_name=_run_on_acc_186, input_specs=[InputTensorSpec(shape=torch.Size([1, 128, 192, 320]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 128, 192, 320]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 128, 192, 320]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
Timing cache is used!
TRT INetwork construction elapsed time: 0:00:00.002000
TRT INetwork construction elapsed time: 0:00:00.002000
2023-03-27 05:27:46.898
Build TRT engine elapsed time: 0:00:02.965473
Build TRT engine elapsed time: 0:00:02.965473
Lowering submodule _run_on_acc_186 elapsed time 0:00:03.002562
Lowering submodule _run_on_acc_186 elapsed time 0:00:03.002562
Now lowering submodule _run_on_acc_188
Now lowering submodule _run_on_acc_188
split_name=_run_on_acc_188, input_specs=[InputTensorSpec(shape=torch.Size([1, 128, 192, 320]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 128, 192, 320]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 128, 192, 320]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
split_name=_run_on_acc_188, input_specs=[InputTensorSpec(shape=torch.Size([1, 128, 192, 320]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 128, 192, 320]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 128, 192, 320]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
Timing cache is used!
TRT INetwork construction elapsed time: 0:00:00.001505
TRT INetwork construction elapsed time: 0:00:00.001505
2023-03-27 05:27:49.880
Build TRT engine elapsed time: 0:00:02.930511
Build TRT engine elapsed time: 0:00:02.930511
Lowering submodule _run_on_acc_188 elapsed time 0:00:02.967487
Lowering submodule _run_on_acc_188 elapsed time 0:00:02.967487
Now lowering submodule _run_on_acc_190
Now lowering submodule _run_on_acc_190
split_name=_run_on_acc_190, input_specs=[InputTensorSpec(shape=torch.Size([1, 128, 192, 320]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 128, 192, 320]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 128, 192, 320]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
split_name=_run_on_acc_190, input_specs=[InputTensorSpec(shape=torch.Size([1, 128, 192, 320]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 128, 192, 320]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 128, 192, 320]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
Timing cache is used!
TRT INetwork construction elapsed time: 0:00:00.002011
TRT INetwork construction elapsed time: 0:00:00.002011
2023-03-27 05:27:54.905
Build TRT engine elapsed time: 0:00:04.971792
Build TRT engine elapsed time: 0:00:04.971792
Lowering submodule _run_on_acc_190 elapsed time 0:00:05.010673
Lowering submodule _run_on_acc_190 elapsed time 0:00:05.010673
Now lowering submodule _run_on_acc_192
Now lowering submodule _run_on_acc_192
split_name=_run_on_acc_192, input_specs=[InputTensorSpec(shape=torch.Size([1, 64, 384, 640]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
split_name=_run_on_acc_192, input_specs=[InputTensorSpec(shape=torch.Size([1, 64, 384, 640]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
Timing cache is used!
TRT INetwork construction elapsed time: 0:00:00.000998
TRT INetwork construction elapsed time: 0:00:00.000998
2023-03-27 05:27:57.915
Build TRT engine elapsed time: 0:00:02.957856
Build TRT engine elapsed time: 0:00:02.957856
Lowering submodule _run_on_acc_192 elapsed time 0:00:02.992994
Lowering submodule _run_on_acc_192 elapsed time 0:00:02.992994
Now lowering submodule _run_on_acc_194
Now lowering submodule _run_on_acc_194
split_name=_run_on_acc_194, input_specs=[InputTensorSpec(shape=torch.Size([1, 64, 384, 640]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 64, 384, 640]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
split_name=_run_on_acc_194, input_specs=[InputTensorSpec(shape=torch.Size([1, 64, 384, 640]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True), InputTensorSpec(shape=torch.Size([1, 64, 384, 640]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
Timing cache is used!
TRT INetwork construction elapsed time: 0:00:00.000999
TRT INetwork construction elapsed time: 0:00:00.000999
2023-03-27 05:28:00.995
Build TRT engine elapsed time: 0:00:03.027250
Build TRT engine elapsed time: 0:00:03.027250
Lowering submodule _run_on_acc_194 elapsed time 0:00:03.063577
Lowering submodule _run_on_acc_194 elapsed time 0:00:03.063577
Now lowering submodule _run_on_acc_196
Now lowering submodule _run_on_acc_196
split_name=_run_on_acc_196, input_specs=[InputTensorSpec(shape=torch.Size([1, 64, 384, 640]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
split_name=_run_on_acc_196, input_specs=[InputTensorSpec(shape=torch.Size([1, 64, 384, 640]), dtype=torch.float16, device=device(type='cuda', index=0), shape_ranges=[], has_batch_dim=True)]
Timing cache is used!
Timing cache is used!
TRT INetwork construction elapsed time: 0:00:00.001001
TRT INetwork construction elapsed time: 0:00:00.001001
2023-03-27 05:28:01.054
Failed to evaluate the script:
Python exception:
Traceback (most recent call last):
File "src\cython\vapoursynth.pyx", line 2866, in vapoursynth._vpy_evaluate
File "src\cython\vapoursynth.pyx", line 2867, in vapoursynth._vpy_evaluate
File "J:\tmp\tempPreviewVapoursynthFile05_19_33_068.vpy", line 38, in
clip = FeMaSR(clip=clip, device_index=0, trt=True, trt_cache_path=r"J:\tmp") # 640x352
File "I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\vsfemasr\__init__.py", line 171, in femasr
module = lowerer(
File "I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\lower.py", line 323, in __call__
return do_lower(module, inputs)
File "I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\passes\pass_utils.py", line 117, in pass_with_validation
processed_module = pass_(module, input, *args, **kwargs)
File "I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\lower.py", line 320, in do_lower
lower_result = pm(module)
File "I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch\fx\passes\pass_manager.py", line 240, in __call__
out = _pass(out)
File "I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch\fx\passes\pass_manager.py", line 240, in __call__
out = _pass(out)
File "I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\passes\lower_pass_manager_builder.py", line 167, in lower_func
lowered_module = self._lower_func(
File "I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\lower.py", line 180, in lower_pass
interp_res: TRTInterpreterResult = interpreter(mod, input, module_name)
File "I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\lower.py", line 132, in __call__
interp_result: TRTInterpreterResult = interpreter.run(
File "I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\fx2trt.py", line 252, in run
assert engine
AssertionError
=> I'll check whether going back to an older driver version helps (after work today).
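In the meantime, to rule out a short VRAM spike that external monitoring might miss, the free memory can be queried from the same Python process right before the FeMaSR call; a minimal sketch, assuming torch.cuda.mem_get_info is available in this torch build:
# hypothetical check, placed directly above the FeMaSR(...) line in the .vpy script
import torch
free_bytes, total_bytes = torch.cuda.mem_get_info(0)  # (free, total) for device 0, in bytes
print(f"free VRAM: {free_bytes / 2**30:.1f} GiB of {total_bytes / 2**30:.1f} GiB")
The printed value can then be compared against the point in the log where the engine build fails.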
Your log seems weird to me because almost every line is repeated twice. Also, please use Pastebin or Gist to post such a long log, so people can follow the conversation more easily.
By the way, I could successfully build the TensorRT engine for a 320x240 clip with my RTX 3050 (8GB VRAM, Game Ready driver 531.41). The build log is https://pastebin.com/wr092PqX.
import vapoursynth as vs
from vsfemasr import femasr
core = vs.core
clip = core.std.BlankClip(width=320, height=240, format=vs.RGBH, length=1)
clip = femasr(clip, trt=True)
clip.set_output()
However, on a 640x360 clip it failed quite quickly. The build log is https://pastebin.com/AKEv2bWL.
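For reference, the failing case is the same minimal script with only the resolution changed (a sketch; 640x360 is assumed from the description above):
import vapoursynth as vs
from vsfemasr import femasr
core = vs.core
# identical to the 320x240 test, only the BlankClip resolution differs
clip = core.std.BlankClip(width=640, height=360, format=vs.RGBH, length=1)
clip = femasr(clip, trt=True)
clip.set_output()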
Even when using:
# Imports
import vapoursynth as vs
# getting Vapoursynth core
core = vs.core
import site
import os
import ctypes
# Adding torch dependencies to PATH
path = site.getsitepackages()[0]+'/torch_dependencies/bin/'
ctypes.windll.kernel32.SetDllDirectoryW(path)
path = path.replace('\\', '/')
os.environ["PATH"] = path + os.pathsep + os.environ["PATH"]
os.environ["CUDA_MODULE_LOADING"] = "LAZY"
from vsfemasr import femasr
clip = core.std.BlankClip(width=320, height=240, format=vs.RGBH, length=1)
#clip = femasr(clip, trt=True)
clip.set_output()
I get the same issue. (https://pastebin.com/S1k050NB)
(Removing the CUDA_MODULE_LOADING line doesn't change anything either.)
(vs-dpir works fine with trt; only femasr is having problems.)
It even happens when I use clip = core.std.BlankClip(width=128, height=128, format=vs.RGBH, length=1)
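Since vs-dpir presumably goes through a different TensorRT path than the torch_tensorrt.fx lowering used here, one way to separate a general driver/TensorRT problem from something specific to the femasr graph would be to build a trivial engine directly with the TensorRT Python API in the same environment; a minimal sketch (the network content is arbitrary, just an identity layer):
import tensorrt as trt
logger = trt.Logger(trt.Logger.VERBOSE)
builder = trt.Builder(logger)
# explicit-batch network containing a single identity layer
network = builder.create_network(1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
x = network.add_input("x", trt.float32, (1, 3, 360, 640))
network.mark_output(network.add_identity(x).get_output(0))
config = builder.create_builder_config()
serialized = builder.build_serialized_network(network, config)
print("trivial engine built:", serialized is not None)
If even this fails or prints False, the problem sits below vsfemasr (driver or TensorRT install); if it succeeds, the issue is more likely in the lowering of this particular model.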
Seeing "Unable to find layer norm plugin, fall back to TensorRT implementation", could I be missing some library?
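As far as I can tell that layer-norm message is only informational: torch_tensorrt looks for an optional plugin and, when it is missing, composes the layer norm from plain TensorRT layers, so it should not by itself abort the build. To see which plugins are actually registered, something like this sketch (standard tensorrt wheel assumed) could be used:
import tensorrt as trt
trt.init_libnvinfer_plugins(trt.Logger(trt.Logger.WARNING), "")
creators = trt.get_plugin_registry().plugin_creator_list
print(sorted(c.name for c in creators))  # look for a LayerNorm-like entry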
Using:
I get:
With trt=False it works. Using trt_min_subgraph_size=5 doesn't help.
Any idea what could be causing this? (I updated to the NVIDIA Studio Driver 531.41 a few days ago; could that be the cause?)
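Recording the exact software stack before and after rolling back the driver would make it easier to compare; a small sketch run inside the same VapourSynth Python environment:
import torch, tensorrt, torch_tensorrt
print("torch:", torch.__version__, "| CUDA:", torch.version.cuda, "| cuDNN:", torch.backends.cudnn.version())
print("TensorRT:", tensorrt.__version__, "| torch_tensorrt:", torch_tensorrt.__version__)
print("GPU:", torch.cuda.get_device_name(0))
# the installed driver version itself is easiest to read from nvidia-smi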