odin1314 / yara-project

Automatically exported from code.google.com/p/yara-project
Apache License 2.0

Optimize memory usage when scanning processes and files #33

Open GoogleCodeExporter opened 9 years ago

GoogleCodeExporter commented 9 years ago
This is a feature request, not a bug.

It would be helpful if YARA did not load entire files into memory to scan 
them, but instead stepped through them in chunks with a sizeable overlap to 
avoid false negatives at chunk boundaries. A similar chunking method for 
scanning processes would also be very helpful, because for files the 
controlling process can at least work around the problem by feeding YARA 
chunks of data itself.

Basically, YARA's memory usage needs to have some limit.

Original issue reported on code.google.com by zeroStei...@gmail.com on 20 Dec 2011 at 6:01
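The overlapping-window scheme the request describes could be sketched as follows. This is a minimal illustration in Python, not YARA's actual code; the function name and parameters are made up for the example. The overlap must be at least one byte shorter than the longest possible match, so that any match crossing a chunk boundary appears whole in some window.

```python
def overlapped_chunks(f, chunk_size, overlap):
    """Yield (offset, data) windows over a binary file-like object.

    Consecutive windows share `overlap` bytes, so a pattern that
    straddles a chunk boundary still appears intact in one window.
    At most `chunk_size` bytes are held in memory at a time.
    """
    assert 0 <= overlap < chunk_size
    offset = 0
    while True:
        f.seek(offset)
        data = f.read(chunk_size)
        if not data:
            return
        yield offset, data
        if len(data) < chunk_size:  # reached end of file
            return
        offset += chunk_size - overlap  # step forward, keeping the overlap
```

For example, with `chunk_size=12` and `overlap=8`, a pattern that crosses the first 12-byte boundary is still fully contained in the window starting at offset 4, whereas non-overlapping 12-byte chunks would split it.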

GoogleCodeExporter commented 9 years ago
YARA uses memory-mapped files. If the scanned file is big, your virtual 
memory usage will grow a lot, but that doesn't mean your physical memory 
usage will be the same; that depends on the available RAM and the file size. 
The operating system will do its magic and evict from physical memory the 
portions of the file that are not in use.

Process scanning is done in chunks; however, the chunks can be big if the 
target process has large blocks of contiguous memory allocated. I think 
there is some room for improvement here.

Original comment by plus...@gmail.com on 22 Dec 2011 at 10:06
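The memory-mapping behavior described above can be demonstrated with a small Python sketch (again, not YARA internals; the function name is invented for the example). The file is mapped read-only, so the OS pages data in on demand and can drop unused pages, keeping physical memory bounded even for very large files:

```python
import mmap

def mmap_find(path, pattern):
    """Return the offset of `pattern` in the file at `path`, or -1.

    The file is never read into a buffer as a whole: mmap gives a
    view backed by the page cache, and the kernel loads and evicts
    pages as the search touches them.
    """
    with open(path, "rb") as f:
        with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mm:
            return mm.find(pattern)
```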