jerch / xterm-addon-image

Image addon for xterm.js
MIT License

iTerm IIP support #43

Closed jerch closed 1 year ago

jerch commented 1 year ago

Supported formats:

Other formats are possible but not yet a goal (theoretically limited only by browser support). The same goes for animated GIF or APNG - we currently have no way to output animated image content, as we do canvas scraping driven by xterm.js' render cycle. Maybe in a later incarnation...

TODO:

Issues:

jerch commented 1 year ago

With the last commit we gain 3-6x faster processing by avoiding string creation and GC pressure; atob also turns out to be much slower than the custom base64 decoder. Things are now usable with typical web-optimized image sizes, but we are not there yet - for bigger images, blocking the main thread with base64 decoding + blob creation is a clear showstopper (~70 ms for a 25 MB PNG file).
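The idea behind such a decoder can be sketched as follows - a minimal scalar version that maps incoming ASCII bytes (the base64 text held in a Uint8Array) straight into a byte buffer, skipping the string + atob round trip entirely. The names and tail handling here are illustrative, not the addon's actual implementation:

```javascript
// Illustrative scalar base64 decoder: no intermediate strings, no atob.
// Lookup table maps ASCII code points to 6-bit values.
const MAP = new Uint8Array(256).fill(255);
const ALPHABET = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/';
for (let i = 0; i < 64; ++i) MAP[ALPHABET.charCodeAt(i)] = i;

function decodeBase64(input) {
  // strip trailing '=' padding
  let len = input.length;
  while (len > 0 && input[len - 1] === 0x3d) --len;
  const out = new Uint8Array((len * 3) >> 2);
  let pos = 0;
  let i = 0;
  // full 4-char groups -> 3 output bytes each
  for (; i + 4 <= len; i += 4) {
    const v = (MAP[input[i]] << 18) | (MAP[input[i + 1]] << 12)
            | (MAP[input[i + 2]] << 6) | MAP[input[i + 3]];
    out[pos++] = v >> 16;
    out[pos++] = (v >> 8) & 0xFF;
    out[pos++] = v & 0xFF;
  }
  // tail: 2 or 3 remaining chars -> 1 or 2 output bytes
  if (i + 2 <= len) {
    const v = (MAP[input[i]] << 10) | (MAP[input[i + 1]] << 4);
    out[pos++] = v >> 8;
    if (i + 3 <= len) out[pos++] = (v | (MAP[input[i + 2]] >> 2)) & 0xFF;
  }
  return out;
}
```

Working on raw bytes this way avoids both the large intermediate string that atob needs as input and the per-character `charCodeAt` loop needed to get its string result back into a typed array.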

jerch commented 1 year ago

Seems the new base64 decoder is quite speedy:

   Context "addons/xterm-addon-image/out/base64.benchmark.js"
      Context "Base64"
         Context "Node - Buffer"
            Case "decode - 256" : 100 runs - average throughput: 28.05 MB/s
            Case "decode - 4096" : 100 runs - average throughput: 241.21 MB/s
            Case "decode - 65536" : 100 runs - average throughput: 525.54 MB/s
            Case "decode - 1048576" : 100 runs - average throughput: 559.42 MB/s
         Context "Base64Decoder"
            Case "decode - 256" : 100 runs - average throughput: 39.17 MB/s
            Case "decode - 4096" : 100 runs - average throughput: 439.40 MB/s
            Case "decode - 65536" : 100 runs - average throughput: 1226.31 MB/s
            Case "decode - 1048576" : 100 runs - average throughput: 1392.19 MB/s
jerch commented 1 year ago

Some promising results with base64 decoding in SIMD (needs Node.js v16 to run):

   Context "addons/xterm-addon-image/out/base64.benchmark.js"
      Context "Base64"
         Context "Node - Buffer"
            Case "decode - 256" : 100 runs - average throughput: 37.66 MB/s
            Case "decode - 4096" : 100 runs - average throughput: 286.23 MB/s
            Case "decode - 65536" : 100 runs - average throughput: 520.20 MB/s
            Case "decode - 1048576" : 100 runs - average throughput: 580.90 MB/s
         Context "Base64Decoder"
            Case "decode - 256" : 100 runs - average throughput: 43.50 MB/s
            Case "decode - 4096" : 100 runs - average throughput: 532.61 MB/s
            Case "decode - 65536" : 100 runs - average throughput: 1325.31 MB/s
            Case "decode - 1048576" : 100 runs - average throughput: 2704.22 MB/s

Not sure yet whether to go down that path, as it creates friction with Safari (which still needs the scalar version).
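If both paths were shipped, the usual way to keep Safari on the scalar version is runtime feature detection: ask the engine whether it validates a tiny module containing SIMD (v128) instructions, and pick the decoder accordingly. This sketch uses the detection module bytes popularized by the wasm-feature-detect project; it illustrates the pattern, it is not what the addon currently does:

```javascript
// Tiny wasm module containing v128 instructions, as used for SIMD
// feature detection by the wasm-feature-detect project.
const SIMD_TEST_MODULE = new Uint8Array([
  0, 97, 115, 109, 1, 0, 0, 0,                   // "\0asm" magic + version
  1, 5, 1, 96, 0, 1, 123,                        // type section: () -> v128
  3, 2, 1, 0,                                    // one function of that type
  10, 10, 1, 8, 0, 65, 0, 253, 15, 253, 98, 11   // body with SIMD opcodes
]);

function hasWasmSimd() {
  try {
    // validate() returns false (instead of throwing) on engines
    // that lack SIMD support, e.g. current Safari
    return WebAssembly.validate(SIMD_TEST_MODULE);
  } catch (e) {
    return false; // no WebAssembly at all
  }
}

// decoder selection would then happen once at startup, roughly:
// const decoder = hasWasmSimd() ? simdDecoder : scalarDecoder;
```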

Edit: Will stick with the scalar version for now, as it is still fast enough and we have workarounds in place for all other Safari shortcomings. Still, I might have to drop Safari support in the future, as it is turning more and more into a big hindrance for better functionality:

In the long term all these issues will be a dealbreaker for Safari; they had better get these things supported, or Safari will die as a target platform for web developers who need performant solutions. I really don't get why Apple neglects its once superior engine and turns it into the new ugly kid.