fur-q / mp4v2

Automatically exported from code.google.com/p/mp4v2

Extremely Poor Performance with Native Win32 - Std Fileprovider #50

Open GoogleCodeExporter opened 8 years ago

GoogleCodeExporter commented 8 years ago
What steps will reproduce the problem?
1. Open a file for modification
2. Copy samples into a single file and close it
3. Repeat this 20 to 50 times (or more)

I upgraded my libmp4v2 library from a trunk version dated 2008-12-18.

The modifications for cross-platform file I/O abstraction and the switch to native Windows functions (CreateFileA) cause serious file read/write performance degradation.

Reading about 40 audio files (MP4Modify, read/write samples, MP4Close, then do the same for the next file, and so on) and combining them into a single m4a file takes 50 seconds using the old file routines. The new routines take over 5 minutes and nearly max out/hang the CPU.

I reverted all the file routines back to before the R296 change.
Performance was restored.

What is the expected output? What do you see instead?
Reasonable read/write performance when opening and closing multiple files.

What version of the product are you using? On what operating system?
Version 1.9, 1.9.1 and trunk R355 all exhibit the problem.
Tried on WinXP SP3 and Win7

Please provide any additional information below.

Original issue reported on code.google.com by jeffreyl...@gmail.com on 11 Jan 2010 at 10:01

GoogleCodeExporter commented 8 years ago
I have a fix for this, but I really can't figure out how to commit changes. The problem appears to be that, for some reason I can't figure out, using the Win32 API doesn't buffer, so each of the tiny reads goes back to the disk. What I did was incorporate a 4K buffer in the file_win32.cpp file. It makes it much, much faster in both reading and writing. I've attached my file. Try building with it and see if it fixes the performance problem.

Dan

Original comment by danahins...@gmail.com on 8 Feb 2010 at 4:03

Attachments:
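For reference, a minimal sketch of the kind of user-land buffering Dan describes: a small fixed buffer in front of ReadFile so that tiny reads don't each hit the disk. This is an illustration only; the class name and structure are assumptions, not the attached file_win32.cpp patch, and a real provider would also need to invalidate the buffer on seeks and handle the write path.

```cpp
// Illustration only (not the attached patch): a 4K read buffer in front of
// the Win32 ReadFile call, so many tiny reads turn into a few large ones.
#include <windows.h>
#include <algorithm>
#include <cstring>

class BufferedWin32Reader {
public:
    explicit BufferedWin32Reader(HANDLE h) : handle(h), pos(0), len(0) {}

    // Satisfy small reads from the buffer, refilling it 4K at a time.
    bool Read(void* out, DWORD want, DWORD& got) {
        got = 0;
        char* dst = static_cast<char*>(out);
        while (want > 0) {
            if (pos == len) {                     // buffer exhausted: refill
                if (!::ReadFile(handle, buf, sizeof(buf), &len, NULL))
                    return false;
                pos = 0;
                if (len == 0)                     // end of file
                    break;
            }
            DWORD n = std::min(want, len - pos);
            std::memcpy(dst, buf + pos, n);
            pos += n; dst += n; want -= n; got += n;
        }
        return true;
    }

private:
    HANDLE handle;
    char   buf[4096];   // the 4K cache that absorbs the tiny reads
    DWORD  pos, len;    // bytes consumed / bytes valid in buf
};
```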

GoogleCodeExporter commented 8 years ago
jeffreyloden,

Could you try this again with the most recent trunk? And is there some way you could give me your test code so I could try and repeat it? There were some more changes to the native Windows functions (mostly to deal with Unicode), but I'd like to know if you still have the same issues.

One thought I had is to improve the file provider abstraction so users can more easily create their own read/write routines (in particular, I'd like to write one using Qt).

Thanks,
kidjan

Original comment by kid...@gmail.com on 22 Sep 2010 at 5:34

GoogleCodeExporter commented 8 years ago
Just to see, I tried this with the most current rev, and the problem still exists. I'm not sure whether any developers run on Windows, but if so, my fix greatly speeds up reads and writes. I would be glad to work with anyone who is interested in getting this into the sources.

It's very easy to test: on Windows, run a read and a write with and without the changes and notice the speedup. I've got thousands of users using my version and I've never had a problem with it.

Dan

Original comment by danahins...@gmail.com on 18 Mar 2012 at 10:54

GoogleCodeExporter commented 8 years ago
What happens if you change the flags used?  For example, see:

http://msdn.microsoft.com/en-us/library/aa363858.aspx#caching_behavior

...in particular,

"Specifying the FILE_FLAG_SEQUENTIAL_SCAN flag can increase performance for 
applications that read large files using sequential access. Performance gains 
can be even more noticeable for applications that read large files mostly 
sequentially, but occasionally skip forward over small ranges of bytes. If an 
application moves the file pointer for random access, optimum caching 
performance most likely will not occur. However, correct operation is still 
guaranteed."

Looks like the code is currently using FILE_ATTRIBUTE_NORMAL (see 
http://code.google.com/p/mp4v2/issues/attachmentText?id=50&aid=-8024171265002390
40&name=File_win32.cpp&token=4idgdzVy-fM3qodZaF02-OC04Xs%3A1332118380013#46 ); 
might be interesting to see what happens if you put in 
FILE_FLAG_SEQUENTIAL_SCAN since most MP4 file reading probably "read(s) large 
files mostly sequentially, but occasionally skip forward over small ranges of 
bytes"

Not opposed to the patch, but I'd prefer a solution that leveraged the existing 
file API rather than caching in user land.

Original comment by kid...@gmail.com on 19 Mar 2012 at 1:32
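For concreteness, the suggested change amounts to passing FILE_FLAG_SEQUENTIAL_SCAN to CreateFile instead of plain FILE_ATTRIBUTE_NORMAL, so the OS cache reads ahead more aggressively. This is a sketch under that assumption, not mp4v2's actual file_win32.cpp code; the wrapper name and the exact share/creation parameters are placeholders.

```cpp
// Sketch only: open a file for reading with the sequential-scan cache hint
// rather than FILE_ATTRIBUTE_NORMAL as in the current file_win32.cpp.
#include <windows.h>

HANDLE OpenForSequentialRead(const wchar_t* path) {
    return ::CreateFileW(
        path,
        GENERIC_READ,
        FILE_SHARE_READ,
        NULL,                        // default security attributes
        OPEN_EXISTING,
        FILE_FLAG_SEQUENTIAL_SCAN,   // hint: file will be read mostly sequentially
        NULL);                       // no template file
}
```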

GoogleCodeExporter commented 8 years ago
Also, this sort of sheds some light on this API:

http://blogs.msdn.com/b/oldnewthing/archive/2012/01/20/10258690.aspx

So, the weird thing is that if an application passes in neither FILE_FLAG_SEQUENTIAL_SCAN nor FILE_FLAG_RANDOM_ACCESS, the underlying API goes into some sort of heuristic mode where it tries to optimize performance based on the calls coming in. It also seems that this behavior may not be consistent across Windows versions.

It has to be said: what a *shit* API this is. The documentation alone really shows you how convoluted it is. I wouldn't be opposed to going back to the more portable way (fopen/fread/fseek) and doing away with this Windows-specific file provider altogether.

Original comment by kid...@gmail.com on 19 Mar 2012 at 1:45

GoogleCodeExporter commented 8 years ago
Going back to fopen/fread/fseek has some pretty big drawbacks too: it can't handle filenames with non-ASCII characters and it can't handle filenames that live in long paths.

I've been super delinquent in not trying Dan's patch for ages. There may be some light at the end of the tunnel for me, but given my past performance, is there someone else who can take a look at his patch, work through any changes (if there are any) with him, and get it committed?

Thanks much.

-DB

Original comment by dbyr...@gmail.com on 19 Mar 2012 at 3:19

GoogleCodeExporter commented 8 years ago
Before we do any patches, somebody needs to at least try FILE_FLAG_SEQUENTIAL_SCAN to see if that resolves the performance issues.

Original comment by jnor...@logitech.com on 19 Mar 2012 at 3:59

GoogleCodeExporter commented 8 years ago
@dbyron,

I was under the impression that fopen accepts UTF-8? If that's true, it seems like you could read filenames with non-ASCII characters, but maybe I'm misinformed about this.

Original comment by kid...@gmail.com on 21 May 2012 at 4:58

GoogleCodeExporter commented 8 years ago
I don't think fopen accepts UTF-8. See the attached test program. I created three text files corresponding to the three files it tries to open. fopen opens the one with the ASCII name, but not the non-ASCII ones.

Original comment by dbyr...@gmail.com on 21 May 2012 at 8:08

Attachments:
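The attached program isn't reproduced in this export, but a test in the same spirit looks roughly like the following: on Windows, fopen interprets a narrow path in the ANSI code page, so a UTF-8-encoded non-ASCII name generally fails to open. The filenames here are made-up placeholders, not the ones from the attachment.

```cpp
// Sketch of the kind of test described above (not the actual attachment):
// try to fopen an ASCII name and a UTF-8-encoded non-ASCII name.
#include <cstdio>

int main() {
    const char* names[] = {
        "ascii.txt",            // plain ASCII name
        "caf\xC3\xA9.txt",      // "café" as raw UTF-8 bytes
    };
    for (const char* name : names) {
        FILE* f = std::fopen(name, "r");
        std::printf("%s: %s\n", name, f ? "opened" : "failed");
        if (f)
            std::fclose(f);
    }
    return 0;
}
```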

GoogleCodeExporter commented 8 years ago
Hi,

I use _wfopen() to handle Unicode filenames. _wfopen() can't handle very long paths (longer than MAX_PATH characters), but I think that's acceptable because Explorer can't handle such long paths either.
-- 
Ken Takata

Original comment by ktakata6...@gmail.com on 10 Jul 2013 at 11:49

Attachments:
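A minimal sketch of the _wfopen approach Ken describes, assuming the provider receives UTF-8 paths and converts them to UTF-16 with MultiByteToWideChar; the helper name and error handling are assumptions, not the attached patch.

```cpp
// Sketch only (not the attached patch): convert a UTF-8 path to UTF-16 and
// open it with _wfopen, which handles non-ASCII names up to MAX_PATH.
#include <windows.h>
#include <cstdio>
#include <vector>

FILE* OpenUtf8Path(const char* utf8Path, const wchar_t* mode) {
    int len = ::MultiByteToWideChar(CP_UTF8, 0, utf8Path, -1, NULL, 0);
    if (len <= 0)
        return NULL;                    // invalid UTF-8 or empty path
    std::vector<wchar_t> wide(len);     // len includes the terminating NUL
    ::MultiByteToWideChar(CP_UTF8, 0, utf8Path, -1, wide.data(), len);
    return _wfopen(wide.data(), mode);
}
```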

GoogleCodeExporter commented 8 years ago
I just tested out using _wfopen, and it performs as well as my fix; since it's an OS API instead of my home-rolled buffering, it seems like the right answer. Can someone put this in the next release?

Dan

Original comment by danahins...@gmail.com on 5 Aug 2013 at 4:09

GoogleCodeExporter commented 8 years ago
happily, if I can get it as a diff?

Original comment by kid...@gmail.com on 16 Aug 2013 at 8:48

GoogleCodeExporter commented 8 years ago
OK, I have attached the diff.
-- 
Ken Takata

Original comment by ktakata6...@gmail.com on 16 Aug 2013 at 9:47

Attachments:

GoogleCodeExporter commented 8 years ago
Hi,

I have updated the patch for R507.
--
Ken Takata

Original comment by ktakata6...@gmail.com on 13 Jan 2015 at 12:52

Attachments: