GoogleCodeExporter opened this issue 9 years ago
Yeah, I'm not surprised that performance sucks, since Gears methods like concat and getBytes have to cross a process boundary with Chrome's multi-process architecture.

Given that you can pass Blobs between workers, and IIRC a Chrome/Gears worker computes in the Gears process, try passing the Blob that you get from desktop.openFiles to a worker, do the concat/getBytes in the worker, and pass the result back to the main thread.

Original comment by nigel.ta...@gmail.com on 10 Oct 2009 at 11:02
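The per-chunk getBytes/concat work that the comment suggests moving into a worker looks roughly like this. Gears' Blob can't run outside the browser, so a plain object stands in for it here; `makeMockBlob` and `readAll` are illustrative names, not Gears APIs — only the `getBytes(offset, length)` shape is taken from the thread.

```javascript
// Sketch only: a stand-in for a Gears Blob, mirroring the shape of
// blob.getBytes(offset, length), which returns an array of byte values.
function makeMockBlob(bytes) {
  return {
    length: bytes.length,
    getBytes: function (offset, length) {
      return bytes.slice(offset, offset + length);
    }
  };
}

// Read a blob chunk by chunk, the way the looping code in this thread does.
// This is the loop body that the comment proposes running inside a worker.
function readAll(blob, chunkSize) {
  var out = [];
  for (var offset = 0; offset < blob.length; offset += chunkSize) {
    var n = Math.min(chunkSize, blob.length - offset);
    out = out.concat(blob.getBytes(offset, n));
  }
  return out;
}

var blob = makeMockBlob([1, 2, 3, 4, 5, 6, 7]);
var all = readAll(blob, 3); // → [1, 2, 3, 4, 5, 6, 7]
```

In real Gears code each getBytes call would cross the process boundary, which is why moving this loop off the main thread at least keeps the UI responsive.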
Implemented this through a worker. It still takes about a minute to read 1 MB in Chrome. The only benefit of using the worker is that the UI is not frozen and no "unresponsive script" alerts are shown.

There is more going on here than just cross-process performance issues. getBytes() returns quickly in the first several steps, but as the loop progresses it takes more and more time (specifically in Chrome). With random-access files/blobs it should not matter whether you read at the beginning of the file or at the end. I therefore suspect that internally (only in Chrome?) access is done sequentially; otherwise there is no reason why accessing the 1000th block should be much slower than the 10th.

Original comment by michael....@gmail.com on 12 Oct 2009 at 12:00
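The observed slowdown is consistent with each read costing time proportional to its offset, as if the implementation re-walked the blob from the start on every call. A quick cost model (illustrative only, not Gears code; `totalCost` and the cost functions are made up for this sketch) shows why that turns a linear scan into quadratic total work:

```javascript
// Sum the per-read cost over a chunked scan of the whole file.
// costPerRead is a model of what one getBytes(offset, n) call costs.
function totalCost(fileSize, chunkSize, costPerRead) {
  var total = 0;
  for (var offset = 0; offset < fileSize; offset += chunkSize) {
    total += costPerRead(offset);
  }
  return total;
}

var SIZE = 1024 * 1024; // 1 MB, as in the report
var CHUNK = 1024;

// True random access: every read costs the same, regardless of offset.
var randomAccess = totalCost(SIZE, CHUNK, function () {
  return 1;
});

// Sequential re-walk: each read first steps through all earlier chunks.
var sequential = totalCost(SIZE, CHUNK, function (offset) {
  return 1 + offset / CHUNK;
});

// randomAccess → 1024 steps; sequential → 524800 steps, ~512x the work,
// and the gap grows quadratically with file size.
```

This matches the symptom exactly: early getBytes calls return quickly, later ones crawl, and the total time blows up on a file as small as 1 MB.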
Have you tried slicing your huge file into "sub-blobs" of 1024 bytes using blob.slice()? You could then call getBytes on each sub-blob to access the bytes of the file in your loop. This could remove the looping delay, because AFAIK blob.slice() has much better performance than blob.getBytes().

Original comment by fbuchin...@gmail.com on 12 Nov 2009 at 10:41
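The suggested slice-then-read approach can be sketched like this. Again a plain object stands in for a Gears Blob (slice() and getBytes() only exist inside Gears); `makeMockBlob` and `readViaSubBlobs` are illustrative names, and the `slice(offset, length)` signature is assumed to match Gears' Blob API.

```javascript
// Stand-in for a Gears Blob with both methods discussed in the thread:
// getBytes(offset, length) and slice(offset, length).
function makeMockBlob(bytes) {
  return {
    length: bytes.length,
    getBytes: function (offset, length) {
      return bytes.slice(offset, offset + length);
    },
    // Returns a new blob backed by the selected byte range.
    slice: function (offset, length) {
      return makeMockBlob(bytes.slice(offset, offset + length));
    }
  };
}

// Cut the blob into fixed-size sub-blobs, then read each sub-blob from
// offset 0, so no getBytes call ever uses a large offset.
function readViaSubBlobs(blob, chunkSize) {
  var out = [];
  for (var offset = 0; offset < blob.length; offset += chunkSize) {
    var n = Math.min(chunkSize, blob.length - offset);
    out = out.concat(blob.slice(offset, n).getBytes(0, n));
  }
  return out;
}

var data = makeMockBlob([10, 20, 30, 40, 50]);
var result = readViaSubBlobs(data, 2); // → [10, 20, 30, 40, 50]
```

The idea is to keep every getBytes offset at 0; it only helps if slice() itself is cheap at large offsets, which the next comment disputes.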
Yes, blob.slice() also degrades in performance as you move further from the beginning.

Original comment by michael....@gmail.com on 12 Nov 2009 at 8:14
Original issue reported on code.google.com by michael....@gmail.com on 9 Oct 2009 at 8:55