gnsmrky / pytorch-fast-neural-style-onnxjs

Running fast neural style with ONNX.js

Mobile OS (Android & iOS) op benchmark results #4

gnsmrky opened this issue 5 years ago

gnsmrky commented 5 years ago

This issue will be kept open for posting op benchmark results on mobile OSes (Android & iOS).

For the correct output values, refer to the Desktop OS op benchmark results.

gnsmrky commented 5 years ago
PyTorch fast-neural-style (FNS) benchmark using ONNX.js v0.1.5
Date: 2019/4/7     18:19:12

os: Android 8.0.0
browser: Chrome 73.0.3683.90
engine: WebKit 537.36

cpu arch: undefined
gpu: Adreno (TM) 540

ONNX.js backend: webgl
loading './onnx_models/mosaic_nc8_zeropad_128x128.onnx'
load time: 341.000ms
warming up tensors... 
warm up time: 1101.800ms

inference time #1: 322.900ms
inference time #2: 403.800ms
inference time #3: 374.200ms
inference time #4: 344.000ms
inference time #5: 308.700ms
inference time #6: 330.700ms
inference time #7: 294.800ms
inference time #8: 402.000ms
inference time #9: 334.500ms
inference time #10: 325.400ms
inference time #11: 376.600ms
inference time #12: 307.300ms
inference time #13: 332.300ms
inference time #14: 359.300ms
inference time #15: 384.100ms
inference time #16: 366.300ms
inference time #17: 348.500ms
inference time #18: 305.100ms
inference time #19: 375.700ms
inference time #20: 496.500ms
inference time #21: 563.000ms
inference time #22: 435.600ms
inference time #23: 354.300ms
inference time #24: 525.100ms
inference time #25: 372.200ms
inference time #26: 372.900ms
inference time #27: 397.300ms
inference time #28: 498.400ms
inference time #29: 413.400ms
inference time #30: 521.000ms

average inference time: 384.863ms
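
For reference, a minimal sketch of how numbers like these could be reproduced with the ONNX.js v0.1.5 browser API. The `benchmark` helper, the zero-filled input tensor, and the run count are assumptions for illustration, not the repo's actual benchmark page code:

```js
// Assumes onnx.min.js is loaded via a <script> tag, exposing the global `onnx` namespace.

async function benchmark(modelUrl, backend, size, runs) {
  // Pick the execution backend ('webgl' or 'wasm') via the backend hint.
  const session = new onnx.InferenceSession({ backendHint: backend });

  let t0 = performance.now();
  await session.loadModel(modelUrl);
  console.log(`load time: ${(performance.now() - t0).toFixed(3)}ms`);

  // NCHW float32 input matching the model's fixed 1x3xHxW shape.
  // Zero-filled here for illustration; timing should not depend on the values.
  const input = new onnx.Tensor(
    new Float32Array(3 * size * size), 'float32', [1, 3, size, size]);

  // Warm-up run, timed separately from the measured runs.
  t0 = performance.now();
  await session.run([input]);
  console.log(`warm up time: ${(performance.now() - t0).toFixed(3)}ms`);

  let total = 0;
  for (let i = 1; i <= runs; i++) {
    t0 = performance.now();
    await session.run([input]);
    const dt = performance.now() - t0;
    total += dt;
    console.log(`inference time #${i}: ${dt.toFixed(3)}ms`);
  }
  console.log(`average inference time: ${(total / runs).toFixed(3)}ms`);
}

benchmark('./onnx_models/mosaic_nc8_zeropad_128x128.onnx', 'webgl', 128, 30);
```

The warm-up run is timed separately because the first `session.run()` on the webgl backend likely includes shader compilation and GPU memory setup, which would explain why the warm-up time (1101.800ms) is well above the steady-state runs.
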
gnsmrky commented 5 years ago
PyTorch fast-neural-style (FNS) benchmark using ONNX.js v0.1.5
Date: 2019/4/7     18:20:37

os: Android 8.0.0
browser: Chrome 73.0.3683.90
engine: WebKit 537.36

cpu arch: undefined
gpu: Adreno (TM) 540

ONNX.js backend: wasm
loading './onnx_models/mosaic_nc8_128x128_onnxjs014_cpu.onnx'
load time: 966.200ms
warming up tensors... 
warm up time: 1356.800ms

inference time #1: 807.400ms
inference time #2: 942.800ms
inference time #3: 932.000ms
inference time #4: 911.900ms
inference time #5: 904.400ms
inference time #6: 854.100ms
inference time #7: 853.500ms
inference time #8: 863.900ms
inference time #9: 871.400ms
inference time #10: 937.900ms
inference time #11: 931.300ms
inference time #12: 895.300ms
inference time #13: 899.600ms
inference time #14: 899.300ms
inference time #15: 885.500ms
inference time #16: 927.000ms
inference time #17: 959.700ms
inference time #18: 851.500ms
inference time #19: 897.100ms
inference time #20: 877.700ms
inference time #21: 925.600ms
inference time #22: 847.700ms
inference time #23: 877.200ms
inference time #24: 925.500ms
inference time #25: 925.200ms
inference time #26: 872.100ms
inference time #27: 910.000ms
inference time #28: 931.000ms
inference time #29: 846.600ms
inference time #30: 909.800ms

average inference time: 895.800ms
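
The wasm run presumably differs only in the backend hint and in using the CPU-exported model; with the hypothetical harness sketched above, that would be a one-line change:

```js
// Same harness as above; only the backend hint and the CPU-exported model change.
benchmark('./onnx_models/mosaic_nc8_128x128_onnxjs014_cpu.onnx', 'wasm', 128, 30);
```

On this device the wasm backend averages ~896ms versus ~385ms for webgl on the same 128x128 model size, roughly 2.3x slower, which is consistent with wasm executing on the CPU while webgl offloads to the Adreno 540 GPU.
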
gnsmrky commented 5 years ago
PyTorch fast-neural-style (FNS) benchmark using ONNX.js v0.1.5
Date: 2019/4/7     18:25:25

os: Android 8.0.0
browser: Chrome 73.0.3683.90
engine: WebKit 537.36

cpu arch: undefined
gpu: Adreno (TM) 540

ONNX.js backend: webgl
loading './onnx_models/mosaic_nc8_zeropad_256x256.onnx'
load time: 1873.800ms
warming up tensors... 
warm up time: 2478.800ms

inference time #1: 2256.000ms
inference time #2: 2156.400ms
inference time #3: 2150.800ms
inference time #4: 2210.400ms
inference time #5: 2203.300ms
inference time #6: 2112.700ms
inference time #7: 2109.400ms
inference time #8: 2253.200ms
inference time #9: 2068.500ms
inference time #10: 2261.000ms
inference time #11: 2171.400ms
inference time #12: 2164.900ms
inference time #13: 2135.100ms
inference time #14: 2114.800ms
inference time #15: 2169.200ms
inference time #16: 2171.800ms
inference time #17: 2123.600ms
inference time #18: 2144.200ms
inference time #19: 2128.400ms
inference time #20: 2220.000ms
inference time #21: 2215.700ms
inference time #22: 2147.300ms
inference time #23: 2218.200ms
inference time #24: 2099.800ms
inference time #25: 2201.100ms
inference time #26: 2121.300ms
inference time #27: 1998.100ms
inference time #28: 2161.100ms
inference time #29: 2161.000ms
inference time #30: 2216.600ms

average inference time: 2162.177ms
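
With the same hypothetical harness, the 256x256 run would only swap in the larger model and input shape:

```js
// Same webgl harness; larger model and matching 1x3x256x256 input.
benchmark('./onnx_models/mosaic_nc8_zeropad_256x256.onnx', 'webgl', 256, 30);
```

At ~2162ms average, the 256x256 model is about 5.6x slower than the 128x128 webgl run, somewhat above the 4x increase in pixel count, plausibly due to per-layer overhead growing with the larger intermediate tensors.
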