Closed: @Umcaruje closed this issue 10 months ago
Found a cached page here:

[ 2458375 ] Open large WAV file
Submitted By: Nobody/Anonymous - nobody
Date Submitted: 2008-12-22 11:59
Last Updated By: Item Submitter - Tracker Item Submitted
Date Last Updated: No updates since submission
Number of Comments: 0
Number of Attachments: 0
Category: Crash
Group: Current Release
Assigned To: Nobody/Anonymous
Priority: 5
Status: Open
Resolution: None
Summary: Open large WAV file
Private: No
From the "my computer" browser in LMMS, open a 1.5 hour long WAV file.
After a while, LMMS will crash.
That's pretty unspecific. Maybe just bump up the file size limit, or set it relative to available RAM? Can you make it crash if you remove the limit?
@Umcaruje we can bump it to 200MB if you like, but I just tested against a 900MB WAV and there are some usability issues with it.
[screenshot: Sample Track (900MB WAV)]
[screenshot: Audio File Processor (900MB WAV)]
Biggest observed problems
@zonkmachine good find. It inspired me to test this out, since there was no specific reasoning mentioned in the cached bug report for the 100MB ceiling.
@tresf How much of an increase in RAM usage is there with the 2 GB spike, and what's your idle usage?
Would it be possible to have a RAM limit setting in LMMS like Blender has for the VSE? Someone with 4 GB of RAM might not want LMMS to use more than 2 GB, but I wouldn't mind 10-12 GB of usage (if it's necessary for a big sample or speeds things up) as I have 16 GB of RAM.
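For reference, probing physical RAM at startup is cheap, so a cap relative to the machine's memory is feasible. A minimal sketch assuming Linux/glibc; the helper name is hypothetical, nothing like it exists in LMMS yet:

#include <QtGlobal>
#include <unistd.h>

// Hypothetical helper: derive a sample-size cap from physical RAM instead of
// hard-coding 100 MB; falls back to the old constant if the probe fails.
static qint64 suggestedSampleLimitBytes()
{
	const long pages = sysconf( _SC_PHYS_PAGES );
	const long pageSize = sysconf( _SC_PAGE_SIZE );
	if( pages <= 0 || pageSize <= 0 )
	{
		return 100 * 1024 * 1024; // conservative fallback
	}
	return ( (qint64) pages * pageSize ) / 4; // e.g. a quarter of physical RAM
}

A user-facing setting, as suggested above, could then simply override this default.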
buried on sourceforge somewhere.
If the ticket system in SourceForge is re-enabled, will the bug archive come back?
Regarding the 100 MiB limit (which limits the file size, not the uncompressed audio), my idea was to use a file reader in those cases: temporary uncompressed file, file seeks, etc.
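To illustrate that idea: a streaming reader on top of libsndfile (which LMMS already uses for decoding) would only ever hold the currently needed frames in RAM. A rough sketch; the class and its interface are invented for illustration:

#include <sndfile.h>
#include <cstring>

// Hypothetical on-demand reader: rather than decoding the whole file into
// RAM up front, seek to and decode only the frames currently needed.
class StreamingSampleReader
{
public:
	StreamingSampleReader( const char * path )
	{
		memset( &m_info, 0, sizeof( m_info ) );
		m_file = sf_open( path, SFM_READ, &m_info );
	}
	~StreamingSampleReader()
	{
		if( m_file ) { sf_close( m_file ); }
	}

	// Read up to `frames` interleaved float frames starting at frame `start`.
	sf_count_t read( sf_count_t start, float * dst, sf_count_t frames )
	{
		if( !m_file ) { return 0; }
		sf_seek( m_file, start, SEEK_SET );
		return sf_readf_float( m_file, dst, frames );
	}

private:
	SF_INFO m_info;
	SNDFILE * m_file;
};

Seeking in compressed formats isn't free, which is presumably why a temporary uncompressed file is mentioned as the backing store.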
Potential inefficiencies in re-draw routines
Yeah I talked with @BaraMGB about this, the drawing routine is very inefficient when dealing with larger samples, as we're drawing this huge polyline for every sample point, which works very well with smaller samples, but eats up the CPU for larger ones.
we can bump it to 200MB
Or we actually make it unlimited and warn about instability, maybe? Databenders would probably appreciate this, and later on we could implement @jasp00's idea, or set the limit relative to RAM as @zonkmachine suggested.
Yeah I talked with @BaraMGB about this, the drawing routine is very inefficient when dealing with larger samples, as we're drawing this huge polyline for every sample point, which works very well with smaller samples, but eats up the CPU for larger ones.
I had a deeper look into the drawing routine in the meantime. It isn't as inefficient as I thought: it calculates the accuracy (frames per pixel) dynamically according to the zoom level, see https://github.com/LMMS/lmms/blob/master/src/core/SampleBuffer.cpp#L914 But the whole paint event is called many times, e.g. every time the song editor is resized (unnecessarily).
Perhaps a better way would be to calculate the drawing once and save it into a QPainterPath. But drawing paths costs CPU time too. I haven't tested this.
Perhaps a better way would be to calculate the drawing once and save it into a QPainterPath. But drawing paths costs CPU time too. I haven't tested this.
Ideally, it should be threaded off.
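An untested sketch of that caching idea, building on the comments above. m_cachedPath, m_cachedZoom, m_frames, peakAt() and the method itself are invented names, not the real LMMS API:

// Rebuild the waveform path only when the zoom level actually changes;
// ordinary repaints (e.g. from resizing the Song Editor) then just replay
// the cached path instead of recomputing a polyline point per sample.
void SampleTCOView::paintWaveform( QPainter & p, const QRect & r, float zoom )
{
	if( m_cachedPath.isEmpty() || zoom != m_cachedZoom )
	{
		QPainterPath path;
		const int fpp = qMax( 1, m_frames / qMax( 1, r.width() ) ); // frames per pixel
		for( int x = 0; x < r.width(); ++x )
		{
			const float v = peakAt( x * fpp ); // illustrative buffer accessor
			const qreal y = r.height() * ( 1.0 - v ) / 2;
			if( x == 0 ) { path.moveTo( x, y ); }
			else { path.lineTo( x, y ); }
		}
		m_cachedPath = path;
		m_cachedZoom = zoom;
	}
	p.drawPath( m_cachedPath );
}

Generating the path in a worker thread, as suggested, would then just mean swapping the cached path in when the computation finishes.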
we can bump it to 200MB
Or we actually make it unlimited and warn about instability,
Both. We...
a - bump the limit to 200MB and...
b - turn the limit into a warning or maybe a yes/no box, but allow you to try any size...
c - improve drawing (threaded, as @tresf suggests above)
I think @Umcaruje should go ahead and bump up the limit to 200MB right now for rc2. It could reduce the number of users that ever see this message considerably. People who are on weaker machines, like I am, already expect things to slow down once in a while.
I think @Umcaruje should go ahead and bump up the limit to 200MB right now for rc2
Or alleviate it completely and open enhancements for the other items identified. WAVs over 90 minutes/900 MB are quite rare when composing. In 2009, the average RAM was around 1 GB. The crash should never occur; the rest is just very bad UX.
Just drop the file limit and join the new millennium.
Ok I'll drop the limit, but I think we should keep this issue open, until the UX is fixed.
I rendered out 2 hours of white noise in Audacity... but 100MB is very limiting when dealing with uncompressed audio
@Umcaruje, at the risk of being obtuse, who in the world is going to load a file that big in LMMS? The biggest WAV files I encounter are nowhere near that.
who in the world is going to load a file that big
One of the users
Per https://www.facebook.com/groups/GACToolTime/permalink/1696492363975805/, as mentioned in the OP.
I just removed the check, and I get a crash every time I try to load a ~600 MB WAV file:
Large sample files can cause system instabilities
[Thread 0x7fffa53a4700 (LWP 12896) exited]
[Thread 0x7fffa61c0700 (LWP 12895) exited]
MemoryManager.cpp: Couldn't allocate memory: 39855375 chunks asked
Thread 1 "lmms" received signal SIGABRT, Aborted.
0x00007ffff4244428 in __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:54
54 ../sysdeps/unix/sysv/linux/raise.c: No such file or directory.
(gdb) bt
#0 0x00007ffff4244428 in __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:54
#1 0x00007ffff424602a in __GI_abort () at abort.c:89
#2 0x00007ffff67fcf15 in qt_message_output(QtMsgType, char const*) () from /usr/lib/x86_64-linux-gnu/libQtCore.so.4
#3 0x00007ffff67fd371 in ?? () from /usr/lib/x86_64-linux-gnu/libQtCore.so.4
#4 0x00007ffff67fdc91 in qFatal(char const*, ...) () from /usr/lib/x86_64-linux-gnu/libQtCore.so.4
#5 0x0000000000508c71 in MemoryManager::alloc(unsigned long) ()
#6 0x0000000000524e51 in SampleBuffer::directFloatWrite(float*&, int, int) ()
#7 0x000000000052500b in SampleBuffer::decodeSampleSF(char const*, float*&, unsigned char&, unsigned int&) ()
#8 0x0000000000526b89 in SampleBuffer::update(bool) ()
#9 0x000000000052713d in SampleBuffer::setAudioFile(QString const&) ()
#10 0x00000000005f03e0 in SampleTCO::setSampleFile(QString const&) ()
#11 0x00000000005f04f5 in SampleTCOView::mouseDoubleClickEvent(QMouseEvent*) ()
#12 0x00007ffff70e2450 in QWidget::event(QEvent*) () from /usr/lib/x86_64-linux-gnu/libQtGui.so.4
#13 0x00007ffff708afdc in QApplicationPrivate::notify_helper(QObject*, QEvent*) () from /usr/lib/x86_64-linux-gnu/libQtGui.so.4
#14 0x00007ffff70920d6 in QApplication::notify(QObject*, QEvent*) () from /usr/lib/x86_64-linux-gnu/libQtGui.so.4
#15 0x00007ffff691790d in QCoreApplication::notifyInternal(QObject*, QEvent*) () from /usr/lib/x86_64-linux-gnu/libQtCore.so.4
#16 0x00007ffff70916dd in QApplicationPrivate::sendMouseEvent(QWidget*, QMouseEvent*, QWidget*, QWidget*, QWidget**, QPointer<QWidget>&, bool) () from /usr/lib/x86_64-linux-gnu/libQtGui.so.4
#17 0x00007ffff710f3f2 in ?? () from /usr/lib/x86_64-linux-gnu/libQtGui.so.4
#18 0x00007ffff710ec83 in QApplication::x11ProcessEvent(_XEvent*) () from /usr/lib/x86_64-linux-gnu/libQtGui.so.4
#19 0x00007ffff7138542 in ?? () from /usr/lib/x86_64-linux-gnu/libQtGui.so.4
#20 0x00007ffff3aec1a7 in g_main_context_dispatch () from /lib/x86_64-linux-gnu/libglib-2.0.so.0
#21 0x00007ffff3aec400 in ?? () from /lib/x86_64-linux-gnu/libglib-2.0.so.0
#22 0x00007ffff3aec4ac in g_main_context_iteration () from /lib/x86_64-linux-gnu/libglib-2.0.so.0
#23 0x00007ffff69482ae in QEventDispatcherGlib::processEvents(QFlags<QEventLoop::ProcessEventsFlag>) () from /usr/lib/x86_64-linux-gnu/libQtCore.so.4
#24 0x00007ffff7138616 in ?? () from /usr/lib/x86_64-linux-gnu/libQtGui.so.4
#25 0x00007ffff691618f in QEventLoop::processEvents(QFlags<QEventLoop::ProcessEventsFlag>) () from /usr/lib/x86_64-linux-gnu/libQtCore.so.4
#26 0x00007ffff69164f5 in QEventLoop::exec(QFlags<QEventLoop::ProcessEventsFlag>) () from /usr/lib/x86_64-linux-gnu/libQtCore.so.4
#27 0x00007ffff691c4b9 in QCoreApplication::exec() () from /usr/lib/x86_64-linux-gnu/libQtCore.so.4
#28 0x00000000004b7025 in main ()
At first I thought it was because I didn't have enough RAM available, but the crash happened even when I had around 1.5 GB of RAM free.
Edit: OS info: elementary OS Loki (Ubuntu 16.04 based), running on a laptop with an Intel Core i7 and 8 GB of RAM.
The error message states that it tries to allocate 39855375 new chunks. That number of chunks is determined in the following line in MemoryManager:

int requiredChunks = size / MM_CHUNK_SIZE + ( size % MM_CHUNK_SIZE > 0 ? 1 : 0 );

MM_CHUNK_SIZE is defined as 64 in MemoryManager.h. Assuming that this is 64 bytes, it tries to allocate 39855375 * 64 = 2550744000 bytes ≈ 2432.5 MB for that file. That would be consistent with a ~600 MB 16-bit mono WAV expanding to LMMS's internal 8-bytes-per-frame stereo float format.
I wonder whether the MemoryManager is something useful in the first place (at least in that implementation). Doing a linear search over some pools even for a small allocation might not be the best for performance. Was it ever tested whether the manager brings real benefits?
Was it ever tested whether the manager brings real benefits?
Here's some dialog on the testing that was performed: https://github.com/LMMS/lmms/pull/1088#issuecomment-53958448
And feedback from @jasp00 https://github.com/LMMS/lmms/pull/1088#issuecomment-241169282
#4 0x00007ffff67fdc91 in qFatal(char const*, ...) () from /usr/lib/x86_64-linux-gnu/libQtCore.so.4
#5 0x0000000000508c71 in MemoryManager::alloc(unsigned long) ()
#6 0x0000000000524e51 in SampleBuffer::directFloatWrite(float*&, int, int) ()
#7 0x000000000052500b in SampleBuffer::decodeSampleSF(char const*, float*&, unsigned char&, unsigned int&) ()
SampleBuffer::directFloatWrite() tries to allocate memory in https://github.com/LMMS/lmms/blob/master/src/core/MemoryManager.cpp#L53:L56:
void * MemoryManager::alloc( size_t size )
{
if( !size )
{
return NULL;
}
But it doesn't look like we test to see if the operation worked: https://github.com/LMMS/lmms/blob/919ca8b4d71a79cfb899b0a81c8bcc49cc8d08b3/src/core/SampleBuffer.cpp#L315:L320
void SampleBuffer::directFloatWrite ( sample_t * & _fbuf, f_cnt_t _frames, int _channels)
{
m_data = MM_ALLOC( sampleFrame, _frames );
const int ch = ( _channels > 1 ) ? 1 : 0;
Suggestion to test m_data (Edit: pseudocode-ish, I can't test any of this right now):
bool SampleBuffer::directFloatWrite ( sample_t * & _fbuf, f_cnt_t _frames, int _channels)
{
	m_data = MM_ALLOC( sampleFrame, _frames );
	if( !m_data )
	{
		// Allocation failed: report it to the caller instead of crashing.
		return false;
	}
	const int ch = ( _channels > 1 ) ? 1 : 0;
	...
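Worth noting: for that NULL check to ever trigger, MemoryManager::alloc() would have to report failure instead of aborting (frame #4 of the backtrace above shows it currently ends in qFatal()). A sketch of that other half, with malloc standing in for the real pool logic:

#include <cstdlib>
#include <QtGlobal>

void * MemoryManager::alloc( size_t size )
{
	if( !size )
	{
		return NULL;
	}
	void * ptr = malloc( size ); // stand-in for the real chunk/pool machinery
	if( !ptr )
	{
		// Warn and return NULL instead of qFatal(), which aborts the process.
		qWarning( "MemoryManager: couldn't allocate %lu bytes",
				(unsigned long) size );
		return NULL;
	}
	return ptr;
}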
SampleBuffer should use new/delete instead of MemoryManager because allocation is not part of the mixer loop. The exceptions are the metronome ticks, but those SamplePlayHandles should use already loaded samples rather than loading each time.
Freeze/crash on opening a ~93 MB file containing 5 hours of lo-fi noise. So we may have to test the actual length in samples too, not just the file size (a sketch follows the backtrace below).
bt
...
Program terminated with signal SIGSEGV, Segmentation fault.
#0 0x00000000005458ee in SampleBuffer::directFloatWrite (this=0x7fb740e00fa0, _fbuf=@0x7ffcf95eaae0: 0x7fb65aa57010, _frames=592879931, _channels=1)
at /home/zonkmachine/builds/lmms/lmms/src/core/SampleBuffer.cpp:337
337 m_data[frame][0] = _fbuf[idx+0];
(gdb) bt
#0 0x00000000005458ee in SampleBuffer::directFloatWrite (this=0x7fb740e00fa0, _fbuf=@0x7ffcf95eaae0: 0x7fb65aa57010, _frames=592879931, _channels=1)
at /home/zonkmachine/builds/lmms/lmms/src/core/SampleBuffer.cpp:337
#1 0x0000000000545b8e in SampleBuffer::decodeSampleSF (this=0x7fb740e00fa0, _f=0x3aefba0 "/home/zonkmachine/Music/5hourlofinoise.ogg",
_buf=@0x7ffcf95eaae0: 0x7fb65aa57010, _channels=@0x7ffcf95eaaaa: 1 '\001', _samplerate=@0x7ffcf95eaaac: 44100)
at /home/zonkmachine/builds/lmms/lmms/src/core/SampleBuffer.cpp:415
...
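Regarding testing the actual length in samples: libsndfile reports the frame and channel counts in the header, so the check could target the decoded size rather than the on-disk size. A rough sketch; the helper name and limit policy are invented:

#include <sndfile.h>
#include <cstring>

// A 93 MB Ogg holding hours of audio can expand to several GB of float
// frames, so estimate the decoded footprint before allocating anything.
bool fitsInMemory( const char * path, sf_count_t limitBytes )
{
	SF_INFO info;
	memset( &info, 0, sizeof( info ) );
	SNDFILE * f = sf_open( path, SFM_READ, &info );
	if( !f )
	{
		return false; // unreadable; let the normal error path handle it
	}
	sf_close( f );
	// LMMS stores samples internally as stereo float frames, 8 bytes per frame.
	return info.frames * 8 <= limitBytes;
}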
Also: after refusing to open a file that is over 100 MB, LMMS crashes when I delete the track that failed to load the file or when I close LMMS.
Program terminated with signal SIGABRT, Aborted.
#0 0x00007f4c794cec37 in __GI_raise (sig=sig@entry=6) at ../nptl/sysdeps/unix/sysv/linux/raise.c:56
56 ../nptl/sysdeps/unix/sysv/linux/raise.c: No such file or directory.
(gdb) bt
#0 0x00007f4c794cec37 in __GI_raise (sig=sig@entry=6) at ../nptl/sysdeps/unix/sysv/linux/raise.c:56
#1 0x00007f4c794d2028 in __GI_abort () at abort.c:89
#2 0x00007f4c7bc48c92 in qt_message_output (msgType=msgType@entry=QtFatalMsg,
buf=0x31ffd18 "MemoryManager: Couldn't find pointer info for pointer: 0x7f4c70219020") at global/qglobal.cpp:2383
#3 0x00007f4c7bc48ff9 in qt_message(QtMsgType, const char *, typedef __va_list_tag __va_list_tag *) (msgType=msgType@entry=QtFatalMsg,
msg=0x6400c0 "MemoryManager: Couldn't find pointer info for pointer: %p", ap=ap@entry=0x7ffe9c314f88) at global/qglobal.cpp:2429
#4 0x00007f4c7bc49804 in qFatal (msg=<optimized out>) at global/qglobal.cpp:2612
#5 0x000000000052456d in MemoryManager::free (ptr=0x7f4c70219020) at /home/zonkmachine/builds/lmms/lmms/src/core/MemoryManager.cpp:122
#6 0x00000000005450d4 in SampleBuffer::~SampleBuffer (this=0x7f4c70218fa0, __in_chrg=<optimized out>)
at /home/zonkmachine/builds/lmms/lmms/src/core/SampleBuffer.cpp:155
#7 0x0000000000545148 in SampleBuffer::~SampleBuffer (this=0x7f4c70218fa0, __in_chrg=<optimized out>)
at /home/zonkmachine/builds/lmms/lmms/src/core/SampleBuffer.cpp:156
#8 0x0000000000523c09 in sharedObject::unref<SampleBuffer> (object=0x7f4c70218fa0) at /home/zonkmachine/builds/lmms/lmms/include/shared_object.h:64
#9 0x00000000006202bb in SampleTCO::~SampleTCO (this=0x7f4c70218ce0, __in_chrg=<optimized out>)
at /home/zonkmachine/builds/lmms/lmms/src/tracks/SampleTrack.cpp:114
#10 0x0000000000620312 in SampleTCO::~SampleTCO (this=0x7f4c70218ce0, __in_chrg=<optimized out>)
at /home/zonkmachine/builds/lmms/lmms/src/tracks/SampleTrack.cpp:115
#11 0x0000000000558743 in Track::~Track (this=0x7f4c702150a0, __in_chrg=<optimized out>) at /home/zonkmachine/builds/lmms/lmms/src/core/Track.cpp:1945
#12 0x0000000000622662 in SampleTrack::~SampleTrack (this=0x7f4c702150a0, __in_chrg=<optimized out>)
at /home/zonkmachine/builds/lmms/lmms/src/tracks/SampleTrack.cpp:573
#13 0x00000000006226a0 in SampleTrack::~SampleTrack (this=0x7f4c702150a0, __in_chrg=<optimized out>)
at /home/zonkmachine/builds/lmms/lmms/src/tracks/SampleTrack.cpp:576
#14 0x00000000005b5900 in TrackContainerView::deleteTrackView (this=0x29f6f80, _tv=0x210a610)
at /home/zonkmachine/builds/lmms/lmms/src/gui/TrackContainerView.cpp:277
I have a fix for both of the last crashes above but need to tidy it up "a bit". Basically it just hands back a single empty frame, informs the user and closes the file.
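Based on that description, the failure path inside SampleBuffer's decode routine might look roughly like this (variable names illustrative, using the bool-returning directFloatWrite() sketched above):

// On failed allocation, fall back to a single frame so later cleanup
// (deleting the track, closing LMMS) still has a valid buffer to free,
// then inform the user and close the file.
if( !directFloatWrite( fbuf, frames, channels ) )
{
	frames = 1;
	m_data = MM_ALLOC( sampleFrame, 1 );
	QMessageBox::warning( NULL, "Error loading sample",
			"Not enough memory to load this sample file." );
	sf_close( snd_file ); // `snd_file` is an illustrative name
	return frames;
}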
@Umcaruje wrote
the drawing routine is very inefficient when dealing with larger samples, as we're drawing this huge polyline for every sample point, which works very well with smaller samples, but eats up the CPU for larger ones.
What would the result be if there was no pattern drawing for files over a specified maximum size? If the lag that @tresf experienced is due to the pattern being drawn, then it could be acceptable not to draw huge patterns (imo). The 'don't draw' limit could be user-selectable in settings. That way, those with superior hardware could use it to the max, and those with inferior hardware would not be restricted natively.
~~Would it be reasonable to grab a set number of samples for display, spaced out over the length of the audio? Files with <=X samples would be drawn with one segment per sample, but a file with >X samples would be approximated by X samples.~~
It's already drawing based on zoom level
@musikBear if you just read the next comment by @BaraMGB:
I had a deeper look into the drawing routine in the meantime. It isn't as inefficient as I thought: it calculates the accuracy (frames per pixel) dynamically according to the zoom level, see https://github.com/LMMS/lmms/blob/master/src/core/SampleBuffer.cpp#L914
So the drawing routine is not as bad as I thought. The real issue here is that LMMS crashes when I import a large file, and that's the main thing that needs to get fixed.
So the drawing routine is not as bad as I thought.
I wonder if it's necessary in most cases to draw lines for both channels? The maximum or average should be plenty, at least for the Song Editor.
And then there's the small detail of having one QPointF for each horizontal pixel (if I'm reading this right). The resolution could probably be lowered without any real problems as long as there's no audio editing with sample precision.
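A per-pixel min/max reduction across both channels would keep the drawn geometry proportional to the widget width rather than the sample length. A rough sketch; the function and its integration are invented:

#include <QVector>
#include <QLineF>

// Collapse each horizontal pixel into one min/max pair taken across both
// channels, so even a 90-minute sample draws at most `width` vertical lines.
QVector<QLineF> buildPeakLines( const sampleFrame * data, int frames,
				int width, int height )
{
	QVector<QLineF> lines;
	const int step = qMax( 1, frames / qMax( 1, width ) );
	for( int x = 0; x < width && x * step < frames; ++x )
	{
		float lo = 0.0f, hi = 0.0f;
		const int end = qMin( frames, ( x + 1 ) * step );
		for( int f = x * step; f < end; ++f )
		{
			lo = qMin( lo, qMin( data[f][0], data[f][1] ) );
			hi = qMax( hi, qMax( data[f][0], data[f][1] ) );
		}
		// Map [-1, 1] to widget coordinates, one vertical line per pixel.
		lines.append( QLineF( x, height * ( 1 - hi ) / 2,
				      x, height * ( 1 - lo ) / 2 ) );
	}
	return lines;
}

The result can go straight to QPainter::drawLines(), and since the line count is bounded by the widget width it stays cheap regardless of sample length.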
@Umcaruje in light of https://github.com/LMMS/lmms/pull/3293, shall we close this as wont-fix for now?
@tresf well this is still a problem though; LMMS should allow samples of any size, this is just saving that problem to be addressed later. I could probably edit this issue or make a new one along the lines of 'problems when operating with big sample files'.
@Umcaruje I agree. There are two issues that I see...
Since performance problems in general are already plaguing LMMS, the decision to file this as an individual bug is up to you.
In regard to the original bug report, an investigation was done and a conscious decision (#3293) was made to keep the limit, so for the foreseeable future we won't be fixing this problem, which is why I've asked.
well this is still a problem though, LMMS should allow samples of any size, this is just saving that problem to be addressed later.
I agree. I think this deserves a new bug. I didn't intend to close the door on large files, but in case this turned into a lengthy thing, those issues needed fixing. Also, I just stayed with 100 MB plus a proportional sample-length check; this was based on my computer's limits and on the earlier values, as they agreed with each other. You could perhaps make this system-dependent, at least to some degree, so users with stronger machines can use them to the full.
Edit: OT comment removed
There are two issues that I see...
Large/long sample support (may be upstream bug).
QTractor is using libsndfile too.
Sorry, I seem to have drawn some wrong conclusions about file sizes. I'm now running two 300 MB files side by side with a 90-minute sample and it works just fine; a bit slow to load, of course. I suggest just bumping the limits to some new maximum of at least 300 MB and 90 minutes.
I'm thinking of just increasing the limits a bit. 300 MB/ 90 minutes?
@tresf well this is still a problem though, LMMS should allow samples of any size...
Instead of just a message we could have a warning over a certain size and an OK/Cancel button so it's up to the user to proceed and test the limits of their own system.
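Something along those lines at the point where the hard check currently lives; the helper, strings and threshold are placeholders:

#include <QMessageBox>

// Replace the hard refusal with an informed choice: past a threshold,
// warn about possible instability but let the user proceed anyway.
bool confirmLargeSample( QWidget * parent, qint64 fileSize, qint64 threshold )
{
	if( fileSize <= threshold )
	{
		return true;
	}
	return QMessageBox::question( parent, "Large sample file",
			"This file is very large and may make LMMS slow or "
			"unstable. Load it anyway?",
			QMessageBox::Ok | QMessageBox::Cancel ) == QMessageBox::Ok;
}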
This issue popped up as I was browsing the Glitch Artists Collective: Tool Time Facebook group. One of the users said they were unable to load files that were >100MB, so I rendered out 2 hours of white noise in Audacity and tried to load it, and voilà, this message popped up in the terminal:
This error is generated in SampleBuffer.cpp, where the loading breaks if the file is too large: https://github.com/LMMS/lmms/blob/919ca8b4d71a79cfb899b0a81c8bcc49cc8d08b3/src/core/SampleBuffer.cpp#L204-L208

Now, this change was added by @tobydox back in '09 to fix a bug - #2458375 - which I can't find; it's either forever gone or buried on sourceforge somewhere. Now, I don't want to blindly remove that break without knowing the context of the bug, but 100MB is very limiting when dealing with uncompressed audio, not just for the purpose of databending, but also when remixing longer tracks and importing stems, so input on this is appreciated.