jimmywarting / StreamSaver.js

StreamSaver writes stream to the filesystem directly asynchronous
https://jimmywarting.github.io/StreamSaver.js/example.html
MIT License

Is it possible to use this to create large GIF files? #38

Closed pliablepixels closed 7 years ago

pliablepixels commented 7 years ago

Hi there, I posted this question in the gifshot repo

The situation here is that I am using gifshot to create an animated GIF from multiple images. gifshot creates the blobs in memory, so obviously, after a while, it crashes after eating up all the memory.

Is it somehow possible to combine gifshot and this library to be able to create an animated gif from a series of images, that can be saved to disk/image gallery on mobiles - memory not being a limiting factor? Happy to put the same $100 bounty on it I offered in the gifshot repo if any talented dev can take this up :-) (bounty awarded to jimmywarting)

jimmywarting commented 7 years ago

Wouldn't a video be better suited? I didn't find any streaming method to use with gifshot - it was more or less "here, take this input and give me a blob when it's ready". What have you tried so far?

pliablepixels commented 7 years ago

The problem is the media needs to be created in-phone (not on a server). So creating a video from a set of images is probably far more CPU intensive than a gif? I have no problem if the output is a gif or a video - as long as I can supply it enough images and it is able to stitch them into the animation without loading everything into memory (which is where I thought your library would be useful).

So far, I am only using gifshot as-is, and have had to limit the # of frames it can accept and reduce the quality (sampling) - this lets me create a GIF that is approx 70 frames worth of images @ 800x600 @ a lower sampling rate. If the supplied image array is > 70, I iterate through the array and remove alternate frames until the array is <= 70. Beyond this magic number the app crashes because everything is loaded in memory. I can afford to remove images this way because the series of images is actually a camera feed that is recorded as images and stored on the server - deleting alternate frames is equivalent to reducing the frames per second.

My gifshot code is here

If you are asking what have I tried so far in terms of integrating your library with gifshot, that is where the tiny bounty comes in - I don't know how to even start :p

jimmywarting commented 7 years ago

I have just realized gifshot isn't what makes the gif. It's just a wrapper around the deanm/omggif source. omggif doesn't seem to be able to stream/flush the binary data at all... gifshot only helps you create gifs from different kinds of input...

jimmywarting commented 7 years ago

It may help to switch from base64 to a blob... base64 takes up more memory...
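
For reference, a common way to turn a base64 data URL into a Blob looks roughly like this (a sketch; the helper name `dataUrlToBlob` and the `dataUrl` variable are mine, standing in for whatever gifshot hands back):

```js
// Sketch: convert a "data:image/gif;base64,..." string into a Blob
// (the Blob holds raw bytes, so it is roughly 25% smaller than the base64 string)
function dataUrlToBlob(dataUrl) {
  var parts = dataUrl.split(',');
  var mime = parts[0].match(/:(.*?);/)[1]; // e.g. "image/gif"
  var binary = atob(parts[1]);             // decode base64 into a binary string
  var bytes = new Uint8Array(binary.length);
  for (var i = 0; i < binary.length; i++) {
    bytes[i] = binary.charCodeAt(i);
  }
  return new Blob([bytes], { type: mime });
}
```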

jimmywarting commented 7 years ago

How large is the final gif in size (MB) if it contains 70 frames?

pliablepixels commented 7 years ago

With a sampling of 20 and an 800x600 resize, it takes 22MB on disk. While gifshot is constructing the image, it takes up 920MB of memory. I've uploaded the image here if it helps at all. You won't see much movement except for the time moving, as I forced an alarm as a test.

gifShot creates it in base64, so I do this https://github.com/pliablepixels/zmNinja/blob/master/www/js/EventCtrl.js#L989

I will be out of town till Thu, but will keep sneaking in - thanks for any insight!

jimmywarting commented 7 years ago

Okay, the total size shouldn't come anywhere close to 920MB if you only use 70 images and produce one single gif that is 22MB in size afterwards... It must clearly be a memory leak or bad usage in the other libs.

I don't think you should have to use StreamSaver for such a small result.

pliablepixels commented 7 years ago

Hi, I don't have the 70 images at the moment - I am out of town - but this is one of the images - just copying it 70 times should produce a similar result https://drive.google.com/file/d/0B3iQz0D8vxltc3pMOEt2a1ZMcTA/view?usp=sharing

pliablepixels commented 7 years ago

Do note the 22MB is after I reduce the size of the images to 800x600 and only take 70 frames. In reality, the # of frames is typically 3,000-4,000. I can't use that many frames because I run out of memory.

jimmywarting commented 7 years ago

I have been looking into using https://github.com/antimatter15/whammy as an alternative, to produce a video instead of a gif.

I tried to help them a bit to reduce memory: https://github.com/antimatter15/whammy/issues/29. But it still held references to all the frames during compile 👎 It had no stdin/stdout-like streaming mechanism (aka pipe) where you send some pieces in and receive some back when RAM is taking a toll.

Because both whammy and gifshot use a canvas, I remembered: "hey, it's actually possible to get a stream from a canvas element!!!"

Then I just looked back at what I had done in the examples in the readme.

So I just used that as a reference and created a simple MediaRecorder demo from frames uploaded by a file input (the result was a 233K webm video, 1 sec long). (I never figured out how to control the frame rate or the video format - I think it's possible to choose the mp4 format also...)

I have uploaded the final result and the frames used to create them to dropbox https://dl.dropboxusercontent.com/u/3464804/door.zip

Here is also the code i used to make it: https://jsfiddle.net/jrftnp13/1/

Note: with this method, memory & size shouldn't grow so large that you ever need StreamSaver - the result was not even ½MB... StreamSaver is good for sizes over ~500MB. But if you want to add all 4000 frames and the size goes way up, then you can look at the readme example and hook that into ondataavailable.
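
Something along these lines (an untested sketch of the approach; `$canvas` is assumed to be the canvas the frames are drawn to, and the MIME type and frame rate are browser-dependent):

```js
// Sketch: record the canvas with MediaRecorder instead of building a gif
var stream = $canvas.captureStream(10);   // capture the canvas at ~10 fps
var recorder = new MediaRecorder(stream); // Chrome/Firefox produce webm by default
var chunks = [];

recorder.ondataavailable = function (e) {
  // For really large recordings each chunk could be handed to StreamSaver here
  // instead of being collected in memory
  chunks.push(e.data);
};
recorder.onstop = function () {
  var blob = new Blob(chunks, { type: 'video/webm' });
  console.log(URL.createObjectURL(blob));
};

recorder.start(1000); // hand back a chunk roughly every second
// ...draw each frame to $canvas, then:
recorder.stop();
```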

jimmywarting commented 7 years ago

LoL, just realized when comparing the gif to my video that I created the video backwards ^^ Probably had something to do with how I exported/named all the frames in Photoshop :P

pliablepixels commented 7 years ago

Hi Jimmy, am I just supposed to run the demo, upload a few pictures and it will show the link for the video when ready? I tried several times to upload a few images but it seems to keep waiting. Will this work on mobiles?

Finally, my app is on ionic/angular 1 - not v2 (so no ES6 features). I've set up a blank starter template http://codepen.io/pliablepixels/pen/qqrMQa -- I suppose your example can be ported to pre ES6 too?

jimmywarting commented 7 years ago

ES6 has nothing to do with Angular 2. You are mixing it up with TypeScript.

And yes, you should just run the demo and upload a few pics. What browser are you testing in? What does the console log say? Try the latest Chrome.

pliablepixels commented 7 years ago

Thanks - it worked in Chrome. Doesn't work on Safari - I'll run the debugger in more detail to see where it is getting stuck - my solution needs to work on both iOS (Safari) and Android (Chrome + WebView on Android 5.0+ phones) so I really am looking for a cross platform solution. Really appreciate your inputs so far.

jimmywarting commented 7 years ago

Well, MediaRecorder is limited to Chrome, Opera and Firefox - ref: https://developer.mozilla.org/en-US/docs/Web/API/MediaRecorder#Browser_compatibility
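
A quick feature check before picking this path could look like this (sketch; the fallback branch is an assumption):

```js
// Sketch: only take the MediaRecorder route where it exists
var canRecordCanvas = typeof MediaRecorder !== 'undefined' &&
  typeof HTMLCanvasElement.prototype.captureStream === 'function';

if (canRecordCanvas) {
  // Chrome / Firefox / Opera: record the canvas to a video
} else {
  // Safari / iOS WebView: fall back to a gif encoder
}
```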

pliablepixels commented 7 years ago

Ah shucks - this was so close :-(

jimmywarting commented 7 years ago

Maybe I found something: https://github.com/devongovett/gif-stream. That is, if you still feel like you want a gif as output.

pliablepixels commented 7 years ago

Thank you - I read your comments here with interest - my knowledge of JS isn't very deep, but @bfred-it is absolutely right - the amount of memory consumed while creating the GIF is orders of magnitude higher than the resulting GIF on disk.

Were you able to find any mechanism that worked? I also saw your issue on gif-stream - it seems to use a Node-specific approach - I am looking for something that will work on both Chrome and Safari, and without Node.

jimmywarting commented 7 years ago

Think I have solved it... https://jsfiddle.net/jrftnp13/4/ - I converted the omggif writer to a stream.

The difference now is that whenever you have another stream that consumes the gif stream, it will pull for more data. And for each pull you get, you should add a new frame, either with controller.enqueue or by resolving a promise.
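
In other words, the producer side ends up looking roughly like this (a sketch; `nextFrame()` and `noMoreFrames` are placeholders for whatever paints the next image and resolves with the addFrame arguments):

```js
// Sketch: a pull-based source that only prepares a frame when the consumer asks for one
var rs = new ReadableStream({
  pull: function (controller) {
    if (noMoreFrames) return controller.close();
    return nextFrame().then(function (frameArgs) {
      controller.enqueue(frameArgs); // one gif.addFrame() argument list per pull
    });
  }
});
```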

So if this part of the code had not been there

  let conf = {headers: {'content-type': 'image/gif'}}
  new Response(gif, conf).blob().then(blob => {
    console.log(blob.url())
  })

then the gifWriter wouldn't write any data, because nothing else is consuming the stream, so there is no point for the gif writer to do any work.

A lower-level way of getting a blob would be like this (this is what Response.prototype.blob does in the background; the only difference now is that you don't need a fetch polyfill):

var chunks = []
var reader = gif.getReader()
var pull = () => {
  // Each time you call `read()` here it makes a pull (which results in your pull
  // function being executed and you end up painting a frame to the canvas)
  return reader.read().then(({done, value}) => {
    // The value is first going to be some gif headers,
    // then the binary data of each frame,
    // and the final read (when done == true) carries no value
    if (done) return chunks
    chunks.push(value)
    return pull()
  })
}

pull().then(chunks => {
  console.log(new Blob(chunks))
})

So it's almost something like this:

.pipe( [img1, img2, img3] )
.pipe(img_to_pixel)
.pipe(gifWriter)
.pipe(output)

But if you, for example, stop somewhere in the middle of the output, the rest of the stream halts and is put into an errored state, meaning img3 will not even be read... the whole chain breaks if something, somewhere, goes wrong.
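
A tiny sketch of that behaviour, shown here with cancel on the consumer side (an error anywhere in the chain propagates upstream in a similar way):

```js
// Sketch: stopping the consumer stops the producer too
var rs = new ReadableStream({
  pull: function (controller) { /* would prepare the next frame here */ },
  cancel: function (reason) {
    console.log('upstream stopped, remaining frames are never read:', reason);
  }
});

rs.getReader().cancel('consumer gave up halfway');
```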

pliablepixels commented 7 years ago

Hi Jimmy, any chance you can convert your code to pre-ES6 syntax? Maybe fork this codepen and put your code there? http://codepen.io/pliablepixels/pen/qqrMQa

jimmywarting commented 7 years ago

You could use https://babeljs.io/repl/ to transpile it to es5

pliablepixels commented 7 years ago

I am running the latest demo on Chrome Version 54.0.2840.98 (64-bit) for OS X - the download button never shows up - it seems to be getting stuck while processing the images.

screen shot 2016-11-28 at 4 44 40 pm

jimmywarting commented 7 years ago

Did you remove this part?

        // force palette to be power of 2
        let powof2 = 1
        while (powof2 < palette.length) powof2 <<= 1
        palette.length = powof2

Also, this time I didn't print the download button on the page - I just logged the blob URL.

pliablepixels commented 7 years ago

No - I saw that code and thought it was your override :-) The code worked on the door image I sent you, but not on another random jpg I pulled from my camera.

On another note, can you help port this to pre-ES6 in a form I can import into my project using just a <script src>, like gifshot? I tried running your code through Babel but it gave a lot of errors.

I'll be happy to offer up my proposed bounty for all the work you have done so far anyway; I'd appreciate it if it could be ported to an environment I am familiar with :)

jimmywarting commented 7 years ago

<!-- Probably best if you download these resources -->
<!-- Screw-FileReader, if you want that -->
<script src="https://cdn.rawgit.com/jimmywarting/Screw-FileReader/master/index.js"></script>
<!-- web streams polyfill (includes both the Readable & Writable API) -->
<script src="https://cdn.rawgit.com/creatorrr/web-streams-polyfill/master/dist/polyfill.min.js"></script>

<script src="Gif Writer"></script>
<script src="code to adapt"></script>
GifWriter:

```js
'use strict';

function _classCallCheck(instance, Constructor) { if (!(instance instanceof Constructor)) { throw new TypeError("Cannot call a class as a function"); } }

function check_palette_and_num_colors(palette) {
  var num_colors = palette.length;
  if (num_colors < 2 || num_colors > 256 || num_colors & num_colors - 1) {
    throw new Error('Invalid code/color length, must be power of 2 and 2 .. 256.');
  }
  return num_colors;
}

var GifWriter = function () {
  function GifWriter(rs, width, height) {
    var gopts = arguments.length > 3 && arguments[3] !== undefined ? arguments[3] : {};

    _classCallCheck(this, GifWriter);

    var loop = gopts.loop,
        palette = gopts.palette;

    var p = 0;
    var buf = [];
    var global_palette = palette;

    if (width <= 0 || height <= 0 || width > 65535 || height > 65535) {
      throw new Error('Width/Height invalid.');
    }

    // - Header.
    buf[p++] = 0x47; buf[p++] = 0x49; buf[p++] = 0x46; // GIF
    buf[p++] = 0x38; buf[p++] = 0x39; buf[p++] = 0x61; // 89a

    // Handling of Global Color Table (palette) and background index.
    var gp_num_colors_pow2 = 0;
    var background = 0;
    if (global_palette) {
      var gp_num_colors = check_palette_and_num_colors(global_palette);
      while (gp_num_colors >>= 1) {
        ++gp_num_colors_pow2;
      }
      gp_num_colors = 1 << gp_num_colors_pow2;
      gp_num_colors_pow2--;
      if (gopts.background !== undefined) {
        background = gopts.background;
        if (background >= gp_num_colors) {
          throw new Error('Background index out of range.');
        }
        // The GIF spec states that a background index of 0 should be ignored, so
        // this is probably a mistake and you really want to set it to another
        // slot in the palette. But actually in the end most browsers, etc end
        // up ignoring this almost completely (including for dispose background).
        if (background === 0) throw new Error('Background index explicitly passed as 0.');
      }
    }

    // - Logical Screen Descriptor.
    // NOTE(deanm): w/h apparently ignored by implementations, but set anyway.
    buf[p++] = width & 0xff;
    buf[p++] = width >> 8 & 0xff;
    buf[p++] = height & 0xff;
    buf[p++] = height >> 8 & 0xff;
    // NOTE: Indicates 0-bpp original color resolution (unused?).
    buf[p++] = (global_palette ? 0x80 : 0) | // Global Color Table Flag.
               gp_num_colors_pow2; // NOTE: No sort flag (unused?).
    buf[p++] = background; // Background Color Index.
    buf[p++] = 0; // Pixel aspect ratio (unused?).

    // - Global Color Table
    if (global_palette) {
      for (var _iterator = global_palette, _isArray = Array.isArray(_iterator), _i = 0, _iterator = _isArray ? _iterator : _iterator[Symbol.iterator]();;) {
        var _ref;

        if (_isArray) {
          if (_i >= _iterator.length) break;
          _ref = _iterator[_i++];
        } else {
          _i = _iterator.next();
          if (_i.done) break;
          _ref = _i.value;
        }

        var rgb = _ref;

        buf[p++] = rgb >> 16 & 0xff;
        buf[p++] = rgb >> 8 & 0xff;
        buf[p++] = rgb & 0xff;
      }
    }

    if (Number.isInteger(loop)) {
      // Netscape block for looping.
      if (loop < 0 || loop > 65535) throw "Loop count invalid.";
      // Extension code, label, and length.
      buf[p++] = 0x21; buf[p++] = 0xff; buf[p++] = 0x0b;
      // NETSCAPE2.0
      buf[p++] = 0x4e; buf[p++] = 0x45; buf[p++] = 0x54; buf[p++] = 0x53;
      buf[p++] = 0x43; buf[p++] = 0x41; buf[p++] = 0x50; buf[p++] = 0x45;
      buf[p++] = 0x32; buf[p++] = 0x2e; buf[p++] = 0x30;
      // Sub-block
      buf[p++] = 0x03; buf[p++] = 0x01;
      buf[p++] = loop & 0xff; buf[p++] = loop >> 8 & 0xff;
      buf[p++] = 0x00; // Terminator.
    }

    var self = this;
    var reader = rs.getReader();

    return new ReadableStream({
      start: function start(controller) {
        controller.enqueue(new Uint8Array(buf));
      },
      pull: function pull(controller) {
        return reader.read().then(function (_ref2) {
          var done = _ref2.done,
              value = _ref2.value;

          if (done) {
            controller.enqueue(new Uint8Array([0x3b]));
            controller.close();
            return;
          }
          self.addFrame.apply(self, [controller].concat(value));
        });
      }
    });
  }

  GifWriter.prototype.addFrame = function addFrame(controller, x, y, w, h, indexed_pixels) {
    var opts = arguments.length > 6 && arguments[6] !== undefined ? arguments[6] : {};

    var p = 0;
    var buf = [];

    // TODO(deanm): Bounds check x, y. Do they need to be within the virtual
    // canvas width/height, I imagine?
    if (x < 0 || y < 0 || x > 65535 || y > 65535) {
      throw new Error('x/y invalid.');
    }

    if (w <= 0 || h <= 0 || w > 65535 || h > 65535) throw "Width/Height invalid.";

    if (indexed_pixels.length < w * h) throw "Not enough pixels for the frame size.";

    var using_local_palette = true;
    var palette = opts.palette;
    if (palette === undefined || palette === null) {
      using_local_palette = false;
      palette = global_palette;
    }
    if (palette === undefined || palette === null) throw "Must supply either a local or global palette.";

    var num_colors = check_palette_and_num_colors(palette);

    // Compute the min_code_size (power of 2), destroying num_colors.
    var min_code_size = 0;
    while (num_colors >>= 1) {
      ++min_code_size;
    }
    num_colors = 1 << min_code_size; // Now we can easily get it back.

    var delay = opts.delay === undefined ? 0 : opts.delay;

    // From the spec:
    //     0 - No disposal specified. The decoder is
    //         not required to take any action.
    //     1 - Do not dispose. The graphic is to be left
    //         in place.
    //     2 - Restore to background color. The area used by the
    //         graphic must be restored to the background color.
    //     3 - Restore to previous. The decoder is required to
    //         restore the area overwritten by the graphic with
    //         what was there prior to rendering the graphic.
    //  4-7 - To be defined.
    // NOTE(deanm): Dispose background doesn't really work, apparently most
    // browsers ignore the background palette index and clear to transparency.
    var disposal = opts.disposal === undefined ? 0 : opts.disposal;
    if (disposal < 0 || disposal > 3) // 4-7 is reserved.
      throw "Disposal out of range.";

    var use_transparency = false;
    var transparent_index = 0;
    if (opts.transparent !== undefined && opts.transparent !== null) {
      use_transparency = true;
      transparent_index = opts.transparent;
      if (transparent_index < 0 || transparent_index >= num_colors) throw "Transparent color index.";
    }

    if (disposal !== 0 || use_transparency || delay !== 0) {
      // - Graphics Control Extension
      buf[p++] = 0x21; buf[p++] = 0xf9; // Extension / Label.
      buf[p++] = 4; // Byte size.

      buf[p++] = disposal << 2 | (use_transparency === true ? 1 : 0);
      buf[p++] = delay & 0xff; buf[p++] = delay >> 8 & 0xff;
      buf[p++] = transparent_index; // Transparent color index.
      buf[p++] = 0; // Block Terminator.
    }

    // - Image Descriptor
    buf[p++] = 0x2c; // Image Seperator.
    buf[p++] = x & 0xff; buf[p++] = x >> 8 & 0xff; // Left.
    buf[p++] = y & 0xff; buf[p++] = y >> 8 & 0xff; // Top.
    buf[p++] = w & 0xff; buf[p++] = w >> 8 & 0xff;
    buf[p++] = h & 0xff; buf[p++] = h >> 8 & 0xff;
    // NOTE: No sort flag (unused?).
    // TODO(deanm): Support interlace.
    buf[p++] = using_local_palette === true ? 0x80 | min_code_size - 1 : 0;

    // - Local Color Table
    if (using_local_palette === true) {
      for (var i = 0, il = palette.length; i < il; ++i) {
        var rgb = palette[i];
        buf[p++] = rgb >> 16 & 0xff;
        buf[p++] = rgb >> 8 & 0xff;
        buf[p++] = rgb & 0xff;
      }
    }

    GifWriterOutputLZWCodeStream(buf, p, min_code_size < 2 ? 2 : min_code_size, indexed_pixels);

    controller.enqueue(new Uint8Array(buf));
  };

  return GifWriter;
}();

// Main compression routine, palette indexes -> LZW code stream.
// |index_stream| must have at least one entry.
function GifWriterOutputLZWCodeStream(buf, p, min_code_size, index_stream) {
  buf[p++] = min_code_size;

  var cur_subblock = p++; // Pointing at the length field.

  var clear_code = 1 << min_code_size;
  var code_mask = clear_code - 1;
  var eoi_code = clear_code + 1;
  var next_code = eoi_code + 1;

  var cur_code_size = min_code_size + 1; // Number of bits per code.
  var cur_shift = 0;
  // We have at most 12-bit codes, so we should have to hold a max of 19
  // bits here (and then we would write out).
  var cur = 0;

  function emit_bytes_to_buffer(bit_block_size) {
    while (cur_shift >= bit_block_size) {
      buf[p++] = cur & 0xff;
      cur >>= 8; cur_shift -= 8;
      if (p === cur_subblock + 256) { // Finished a subblock.
        buf[cur_subblock] = 255;
        cur_subblock = p++;
      }
    }
  }

  function emit_code(c) {
    cur |= c << cur_shift;
    cur_shift += cur_code_size;
    emit_bytes_to_buffer(8);
  }

  // I am not an expert on the topic, and I don't want to write a thesis.
  // However, it is good to outline here the basic algorithm and the few data
  // structures and optimizations here that make this implementation fast.
  // The basic idea behind LZW is to build a table of previously seen runs
  // addressed by a short id (herein called output code). All data is
  // referenced by a code, which represents one or more values from the
  // original input stream. All input bytes can be referenced as the same
  // value as an output code. So if you didn't want any compression, you
  // could more or less just output the original bytes as codes (there are
  // some details to this, but it is the idea). In order to achieve
  // compression, values greater then the input range (codes can be up to
  // 12-bit while input only 8-bit) represent a sequence of previously seen
  // inputs. The decompressor is able to build the same mapping while
  // decoding, so there is always a shared common knowledge between the
  // encoding and decoder, which is also important for "timing" aspects like
  // how to handle variable bit width code encoding.
  //
  // One obvious but very important consequence of the table system is there
  // is always a unique id (at most 12-bits) to map the runs. 'A' might be
  // 4, then 'AA' might be 10, 'AAA' 11, 'AAAA' 12, etc. This relationship
  // can be used for an effecient lookup strategy for the code mapping. We
  // need to know if a run has been seen before, and be able to map that run
  // to the output code. Since we start with known unique ids (input bytes),
  // and then from those build more unique ids (table entries), we can
  // continue this chain (almost like a linked list) to always have small
  // integer values that represent the current byte chains in the encoder.
  // This means instead of tracking the input bytes (AAAABCD) to know our
  // current state, we can track the table entry for AAAABC (it is guaranteed
  // to exist by the nature of the algorithm) and the next character D.
  // Therefor the tuple of (table_entry, byte) is guaranteed to also be
  // unique. This allows us to create a simple lookup key for mapping input
  // sequences to codes (table indices) without having to store or search
  // any of the code sequences. So if 'AAAA' has a table entry of 12, the
  // tuple of ('AAAA', K) for any input byte K will be unique, and can be our
  // key. This leads to a integer value at most 20-bits, which can always
  // fit in an SMI value and be used as a fast sparse array / object key.

  // Output code for the current contents of the index buffer.
  var ib_code = index_stream[0] & code_mask; // Load first input index.
  var code_table = {}; // Key'd on our 20-bit "tuple".

  emit_code(clear_code); // Spec says first code should be a clear code.

  // First index already loaded, process the rest of the stream.
  for (var i = 1, il = index_stream.length; i < il; ++i) {
    var k = index_stream[i] & code_mask;
    var cur_key = ib_code << 8 | k; // (prev, k) unique tuple.
    var cur_code = code_table[cur_key]; // buffer + k.

    // Check if we have to create a new code table entry.
    if (cur_code === undefined) { // We don't have buffer + k.
      // Emit index buffer (without k).
      // This is an inline version of emit_code, because this is the core
      // writing routine of the compressor (and V8 cannot inline emit_code
      // because it is a closure here in a different context). Additionally
      // we can call emit_byte_to_buffer less often, because we can have
      // 30-bits (from our 31-bit signed SMI), and we know our codes will only
      // be 12-bits, so can safely have 18-bits there without overflow.
      // emit_code(ib_code);
      cur |= ib_code << cur_shift;
      cur_shift += cur_code_size;
      while (cur_shift >= 8) {
        buf[p++] = cur & 0xff;
        cur >>= 8; cur_shift -= 8;
        if (p === cur_subblock + 256) { // Finished a subblock.
          buf[cur_subblock] = 255;
          cur_subblock = p++;
        }
      }

      if (next_code === 4096) { // Table full, need a clear.
        emit_code(clear_code);
        next_code = eoi_code + 1;
        cur_code_size = min_code_size + 1;
        code_table = {};
      } else { // Table not full, insert a new entry.
        // Increase our variable bit code sizes if necessary. This is a bit
        // tricky as it is based on "timing" between the encoding and
        // decoder. From the encoders perspective this should happen after
        // we've already emitted the index buffer and are about to create the
        // first table entry that would overflow our current code bit size.
        if (next_code >= 1 << cur_code_size) ++cur_code_size;
        code_table[cur_key] = next_code++; // Insert into code table.
      }

      ib_code = k; // Index buffer to single input k.
    } else {
      ib_code = cur_code; // Index buffer to sequence in code table.
    }
  }

  emit_code(ib_code); // There will still be something in the index buffer.
  emit_code(eoi_code); // End Of Information.

  // Flush / finalize the sub-blocks stream to the buffer.
  emit_bytes_to_buffer(1);

  // Finish the sub-blocks, writing out any unfinished lengths and
  // terminating with a sub-block of length 0. If we have already started
  // but not yet used a sub-block it can just become the terminator.
  if (cur_subblock + 1 === p) { // Started but unused.
    buf[cur_subblock] = 0;
  } else { // Started and used, write length and additional terminator block.
    buf[cur_subblock] = p - cur_subblock - 1;
    buf[p++] = 0;
  }
  return p;
}
```

Code you need to adapt:

```js
'use strict';

$input.onchange = function () {
  var files = Array.from($input.files);
  var ctx = $canvas.getContext('2d');
  var w = void 0,
      h = void 0,
      pixels = void 0;

  var rs = new ReadableStream({
    start: function start() {
      // No pull request will be called until this promise is resolved
      return files[0].image().then(function (img) {
        URL.revokeObjectURL(img.src); // Revoke object URL to free memory
        w = img.width;
        h = img.height;
        // initiate the canvas with first frame + width/height
        $canvas.width = w;
        $canvas.height = h;
        pixels = new Uint8Array(w * h);
      });
    },
    // Each time pull gets called you should get the pixel data and
    // enqueue it as if it would be good old gif.addFrame()
    pull: function pull(controller) {
      var frame = files.shift();
      if (!frame) controller.close();

      // here you could fetch the image from the network if you want
      // fetch(url).then(res => blob()).then(blob => blob.image).then(img => { ... })
      return frame.image().then(function (img) {
        URL.revokeObjectURL(img.src); // Revoke object URL to free memory
        ctx.drawImage(img, 0, 0);

        var data = ctx.getImageData(0, 0, w, h).data;
        var palette = [];

        for (var j = 0, k = 0, jl = data.length; j < jl; j += 4, k++) {
          var r = Math.floor(data[j + 0] * 0.1) * 10;
          var g = Math.floor(data[j + 1] * 0.1) * 10;
          var b = Math.floor(data[j + 2] * 0.1) * 10;
          var color = r << 16 | g << 8 | b << 0;
          var index = palette.indexOf(color);

          if (index === -1) {
            pixels[k] = palette.length;
            palette.push(color);
          } else {
            pixels[k] = index;
          }
        }

        // force palette to be power of 2
        var powof2 = 1;
        while (powof2 < palette.length) {
          powof2 <<= 1;
        }
        palette.length = powof2;

        controller.enqueue([0, 0, w, h, pixels, {
          palette: new Uint32Array(palette),
          delay: 5
        }]);
      });
    }
  });

  var gif = new GifWriter(rs, w, h, { loop: 0 });

  var conf = { headers: { 'content-type': 'image/gif' } };
  new Response(gif, conf).blob().then(function (blob) {
    console.log(blob.url());
  });
};
```

And since not all of your targeted browsers support fetch, or the web streams API in particular, you are going to need this reader:

var chunks = [];
var reader = gif.getReader();

function pull() {
  return reader.read().then(function (result) {
    if (result.done) return chunks;
    chunks.push(result.value);
    return pull();
  });
}

pull().then(function (chunks) {
  console.log(new Blob(chunks, {type: "image/gif"}));
});

instead of this:

```js
var conf = { headers: { 'content-type': 'image/gif' } };
new Response(gif, conf).blob().then(function (blob) {
  console.log(blob.url());
});
```

pliablepixels commented 7 years ago

Thank you for your amazing help so far. I'll adapt it to my codepen tomorrow so I can run it by you - I'm sure I'll have a few more questions before I actually get it working in the codepen I posted above.

Till then, how do I go about giving you the bounty? Do you have a Bountysource team or PayPal ID?

pliablepixels commented 7 years ago

Progress so far: http://codepen.io/pliablepixels/pen/qqrMQa -> I'm trying to adapt your code so there are no images to upload - I am passing an array of URLs. Is there an easy way for me to emulate the upload function and return a correct file array, just like <input type=file> does?

jimmywarting commented 7 years ago

http://codepen.io/anon/pen/ObxGZV
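
One way to fake the file array from a list of URLs (a sketch, not necessarily what the codepen does; it assumes the images are same-origin or CORS-enabled, and blob.image() comes from the Screw-FileReader helper included earlier):

```js
// Sketch: turn an array of image URLs into an array of Blobs
var urls = ['frame1.jpg', 'frame2.jpg', 'frame3.jpg']; // placeholder URLs

Promise.all(urls.map(function (url) {
  return fetch(url).then(function (res) { return res.blob(); });
})).then(function (files) {
  // `files` can now replace Array.from($input.files) in the adapter code above
});
```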

pliablepixels commented 7 years ago

Please check your email

pliablepixels commented 7 years ago

@jimmywarting can you review your forced power-of-2 code? gifwriter requires the palette length to be a power of 2 and <= 256 colors. The current code often rounds it up to 4096, resulting in a gif writer error. I'll try to limit it to 256 and see what happens.

pliablepixels commented 7 years ago

Forcing it to a max of 256 produces really dithered colors - any thoughts? (attached image: foo)

jimmywarting commented 7 years ago

Wow... That's bad

Will see if I can think of something

pliablepixels commented 7 years ago

So it looks like gifshot uses some image quantization to reduce the palette - a GIF can't exceed 256 colors.

jimmywarting commented 7 years ago

Looks like https://github.com/devongovett/neuquant is needed

pliablepixels commented 7 years ago

Right - the gifshot version looks like it's packed into a module: https://github.com/yahoo/gifshot/blob/master/src/modules/dependencies/NeuQuant.js

pliablepixels commented 7 years ago

Okay, never mind - it was easier to just put NeuQuant back. This is my final code - I'll post memory results after more testing and close soon! Thanks for your amazing help.

.then(function(img) {
    console.log("URL=" + frame);
    URL.revokeObjectURL(img.src);
    ctx.drawImage(img, 0, 0);

    var data = ctx.getImageData(0, 0, w, h).data;
    // dataToRGB, NeuQuant and componentizedPaletteToArray are the quantization
    // helpers pulled back in from gifshot, so the palette never exceeds 256 colors
    var rgbComponents = dataToRGB(data, w, h);
    var nq = new NeuQuant(rgbComponents, rgbComponents.length, 10);
    var paletteRGB = nq.process();
    var paletteArray = new Uint32Array(componentizedPaletteToArray(paletteRGB));
    var numberPixels = w * h;
    var k = 0, i, r, g, b;

    for (i = 0; i < numberPixels; i++) {
        r = rgbComponents[k++];
        g = rgbComponents[k++];
        b = rgbComponents[k++];
        pixels[i] = nq.map(r, g, b);
    }

    controller.enqueue([0, 0, w, h, pixels, {
        palette: paletteArray,
        delay: 5
    }]);
});

jimmywarting commented 7 years ago

hoping for the best

pliablepixels commented 7 years ago

Huge difference. Benchmarked it on both iOS and desktop (need to check Android). Bottom line: an image set of around 60 images @ 800x600 that was topping 1GB of memory in gifshot (and thus crashing on iOS) stays < 200MB in your modified version. Unfortunately, it looks like I need to convert to base64 to save to the iOS photo gallery, which takes up 3x the memory at the end - even so, it's a huge improvement. gifshot also converts obj.image to base64, which is where it would crash. I'll close this after I check on Android!

I do think this approach is noticeably slower than gifshot but not 100% sure.

screen shot 2016-11-30 at 1 22 54 pm
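
For reference, the usual Blob -> base64 route is FileReader.readAsDataURL, which keeps the whole encoded string in memory - consistent with the extra memory showing up at the end (a sketch; `gifBlob` stands in for the blob built from the stream chunks):

```js
// Sketch: the base64 step that still costs memory at the very end
var reader = new FileReader();
reader.onload = function () {
  var dataUrl = reader.result; // "data:image/gif;base64,..."
  // hand dataUrl to the gallery plugin here
};
reader.readAsDataURL(gifBlob);
```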

jimmywarting commented 7 years ago

Try including Bluebird to see if there is any noticeable difference in speed.

pliablepixels commented 7 years ago

It's weird, but after a successful invocation of this routine, core Angular seems to throw this error (I intercept it and continue, but I'm curious why it happens with your steps):

TypeError: undefined is not an object (evaluating 'parsed.protocol') caused by undefined - basically, it looks like an HTTP call is being made with no config object somewhere in the code we are using.

stack:

ionic.bundle.js:25642 TypeError: Cannot read property 'protocol' of undefined
    at urlIsSameOrigin (file:///Users/winshars/projects/phonegap/zmNinja-mac.app/Contents/Resources/app.asar/lib/ionic/js/ionic.bundle.js:31164:17)
    at sendReq (file:///Users/winshars/projects/phonegap/zmNinja-mac.app/Contents/Resources/app.asar/lib/ionic/js/ionic.bundle.js:23638:25)
    at serverRequest (file:///Users/winshars/projects/phonegap/zmNinja-mac.app/Contents/Resources/app.asar/lib/ionic/js/ionic.bundle.js:23357:16)
    at processQueue (file:///Users/winshars/projects/phonegap/zmNinja-mac.app/Contents/Resources/app.asar/lib/ionic/js/ionic.bundle.js:27879:28)
    at file:///Users/winshars/projects/phonegap/zmNinja-mac.app/Contents/Resources/app.asar/lib/ionic/js/ionic.bundle.js:27895:27
    at Scope.$eval (file:///Users/winshars/projects/phonegap/zmNinja-mac.app/Contents/Resources/app.asar/lib/ionic/js/ionic.bundle.js:29158:28)
    at Scope.$digest (file:///Users/winshars/projects/phonegap/zmNinja-mac.app/Contents/Resources/app.asar/lib/ionic/js/ionic.bundle.js:28969:31)
    at Scope.$apply (file:///Users/winshars/projects/phonegap/zmNinja-mac.app/Contents/Resources/app.asar/lib/ionic/js/ionic.bundle.js:29263:24)
    at file:///Users/winshars/projects/phonegap/zmNinja-mac.app/Contents/Resources/app.asar/lib/ionic/js/ionic.bundle.js:31030:36
    at completeOutstandingRequest (file:///Users/winshars/projects/phonegap/zmNinja-mac.app/Contents/Resources/app.asar/lib/ionic/js/ionic.bundle.js:18706:10) undefined
jimmywarting commented 7 years ago

I'm afraid I can't help you with that part; it's hard to debug a bundled version.

pliablepixels commented 7 years ago

No worries - I've zeroed in on the issue - it's when you are doing this:

if (!frame) controller.close();

I have no idea what that is doing, but I get the feeling it's not 'cleaning up' -> I'll explore more.

pliablepixels commented 7 years ago

Problem solved - it was missing a return after controller.close(), resulting in a null HTTP call.

pliablepixels commented 7 years ago

Android - memory crash for the same image set. Not in your part, but in the part where I need to convert your output to base64 to store it in the phone gallery. Sigh. I'll investigate Android in more detail tomorrow. I couldn't collect memory graphs, as dev tools crashed along with it, so I don't know how much was consumed. I'll check tomorrow whether dev tools allows a way to look at memory dynamically, without having to record it in the timeline (if it crashes, that's of no help).

pliablepixels commented 7 years ago

Memory consumption on Android (before it crashed at the base64 encode): screen shot 2016-11-30 at 4 58 02 pm

So effectively, your part of the code is great. Now I need to find out how to write this to the gallery without a base64 encode, or at least not in memory.

I'm going to close this at this stage. Thanks @jimmywarting - your help has been amazing.

jimmywarting commented 7 years ago

It may still be possible to use StreamSaver with this (if you are running Chrome on Android, or the device supports service workers + a native ReadableStream), since the GifWriter uses web streams now. And since you included the web streams polyfill, you could just do this:

var readableStream = $scope.createGif(files, img.width, img.height);
var fileStream = streamSaver.createWriteStream('image.gif');
readableStream.pipeTo(fileStream);

pliablepixels commented 7 years ago

Thanks - this did not work on iOS (no ReadableStream). I finally figured out how to resolve this: it's not necessary to convert to base64 to write to the photo gallery; you can download to a file and save that file to the gallery. The problem, however, is that devices crash if you try to write a large file to the filesystem in one go (>5MB or so). The solution was to write it in chunks.
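
Roughly what the chunked write looks like (a sketch, assuming a Cordova File plugin FileWriter as `writer` and the finished blob as `gifBlob`; exact plugin APIs may differ):

```js
// Sketch: append the gif to a file in 1MB slices instead of one big write
var CHUNK = 1024 * 1024;
var offset = 0;

function writeNext() {
  if (offset >= gifBlob.size) return; // done - the file can now be saved to the gallery
  var slice = gifBlob.slice(offset, offset + CHUNK);
  writer.onwriteend = function () {
    offset += CHUNK;
    writeNext();
  };
  writer.seek(writer.length); // append after what has already been written
  writer.write(slice);
}

writeNext();
```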

jimmywarting commented 7 years ago

Then you can write each chunk you get from reader.read() 👍