Closed: the13thson closed this issue 11 years ago
I would expect that if you have any output video at all, you have it set up correctly.
What is the exact call you are making and on which exact video? Just the one in the examples?
The FFT plot is in hertz, so a spike at 9 would mean 9 hertz (9 times per second). When I run show_frequencies on the breathing baby I do not get a spike at 9 but one at 0.5 hertz (once every 2 seconds). Can you post a picture of your graph?
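For anyone puzzled by the scale: here is a hypothetical numpy sketch (standard numpy calls only, nothing from this project) showing why a signal that repeats once every 2 seconds, sampled at 30 fps, peaks at 0.5 on a hertz axis:

```python
import numpy as np

# Toy stand-in for a pixel's brightness in the breathing-baby clip:
# a 0.5 Hz oscillation sampled at 30 frames per second for 10 seconds.
fps = 30.0
t = np.arange(0, 10, 1.0 / fps)
signal = np.sin(2 * np.pi * 0.5 * t)

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)  # bin centres in hertz

peak_hz = freqs[np.argmax(spectrum[1:]) + 1]  # skip the DC bin
print(peak_hz)  # ~0.5
```

The key line is `np.fft.rfftfreq(n, d=1/fps)`: it is the sampling interval (one over the frame rate) that converts raw FFT bin indices into hertz, which is why the fps printed by show_frequencies matters.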
I'm running this line:
eulerian_magnification('baby.mp4', freq_min=0.45, freq_max=0.55, amplification=50, gauss_level=4)
Below I have the FFT of the video, using show_frequencies('baby.mp4'), and 2 images showing the intensity differences between the frames of the magnified output video - those frames are about a second apart.
Just to give a few specs:
Thanks for the swift reply.
oh yes, by the way - this is the baby video I'm using just in case we're using different ones: http://people.csail.mit.edu/mrub/vidmag/video/baby.mp4
EDIT: How does the Python code distinguish between colour amplification and motion amplification? Is it strictly based on frequency alone? I tried the wrist video and got a similar-looking FFT graph, so I counted and made an estimated guess at what the pulse should be, and got some result --- the pulse was magnified for 5 seconds in the middle of the 29-second clip, but there were still massive illumination changes. It is as if it's picking up changes in the light of the clip and magnifying them. Is my mistake in the parameters or in the code (I am new to Python)? I ran this: eulerian_magnification('wrist.mp4', gauss_level=4, freq_min=0.8, freq_max=1.0, amplification=50)
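As I understand it, freq_min and freq_max act as a temporal bandpass: frequency components outside that band are zeroed before the signal is amplified. A minimal sketch of that idea on a single pixel's intensity trace (the function name `temporal_bandpass` is my own; the real code filters whole frames, not one trace):

```python
import numpy as np

def temporal_bandpass(trace, fps, freq_min, freq_max):
    """Keep only frequency components between freq_min and freq_max (Hz).

    `trace` is a 1-D array of one pixel's intensity over time -- a toy
    stand-in for the per-pixel temporal filtering done on whole frames.
    """
    spectrum = np.fft.rfft(trace)
    freqs = np.fft.rfftfreq(len(trace), d=1.0 / fps)
    mask = (freqs >= freq_min) & (freqs <= freq_max)
    spectrum[~mask] = 0
    return np.fft.irfft(spectrum, n=len(trace))

# A weak 1 Hz "pulse" buried under a large, slow 0.1 Hz illumination drift:
fps = 30.0
t = np.arange(0, 10, 1.0 / fps)
trace = 0.2 * np.sin(2 * np.pi * 1.0 * t) + 2.0 * np.sin(2 * np.pi * 0.1 * t)

filtered = temporal_bandpass(trace, fps, freq_min=0.8, freq_max=1.2)
# The slow drift is removed; only the ~1 Hz pulse component survives.
```

This also shows why broadband lighting changes can still leak through in practice: any illumination flicker whose frequency happens to fall inside the chosen band gets amplified along with the pulse.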
Those graphs aren't right. These are the graphs I get for baby.mp4:
What versions of numpy, scipy, and python are you running?
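In case it helps others compare setups, the versions can be printed from a Python shell in one go (plain stdlib plus whatever happens to be installed; nothing here is specific to this project):

```python
import sys

# Quick diagnostic: interpreter version plus the three relevant libraries.
print("python:", sys.version.split()[0])
for name in ("numpy", "scipy", "cv2"):
    try:
        module = __import__(name)
        print(name + ":", getattr(module, "__version__", "unknown"))
    except ImportError:
        print(name + ": not installed")
```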
You are correct in your understanding that the algorithm is amplifying changes in light. The algorithm that best enhances motion uses a Laplacian pyramid; I've currently only implemented a Gaussian pyramid, which is best suited for color-change amplification. This is why the wrist example doesn't look as good as MIT's yet.
According to Python, I'm running:
I find it bizarre that our FFT and pixel-average waveforms can look so different; if it is a version problem, I wonder what changed between versions. If I need to change a version, please tell me. I installed numpy, scipy, and python straight from Ubuntu's Software Centre. Also, I don't know if this would have caused a problem, but I had to change from OpenCV v2.2.0 to v2.3.1 for this project, as I had been using different software for my research up until now.
OK. When I was looking at the code I couldn't distinguish between the colour and motion amplification, but I understand now.
Thanks for all the help thus far. Any idea how I could make the program work on my side? Or which video I could use to show off its ability?
Hi again. After much hassle, I decided to try out another application that uses Eulerian magnification through a webcam. This required OpenCV 2.4.1 instead, plus OpenMDAO v0.5.5 and all its dependencies.
I got that code to work (https://github.com/thearn/webcam-pulse-detector) and then decided to try this code again. These are my show_frequencies graphs for 'baby.mp4' --- I compared them to yours, then ran eulerian_magnification on the baby video and got a nice video out.
I then checked my versions: python, numpy, and scipy were all the same as before, so I'm assuming it's the OpenCV version. If you do get it to work with OpenCV 2.3.1, then it could possibly be the bindings, which may have been different. One more detail --- I installed OpenCV 2.3.1 straight from the Ubuntu Software Centre, whereas 2.4.0 I installed from the tar.gz and built with 'make'.
Hope this helps with sorting out any future problems others have.
Thanks for all the helpful information!
Hi
Just to explain, I'm very new to Python and somewhat new to Ubuntu and OpenCV. I've coded before, so I quickly caught on to how the code works and began to implement it, but even when I run the example code I get a flash of bright light that completely blinds the image, followed by complete darkness, and so on.
I'm not too sure what the problem is, as I roughly understand the code and the setup. I thought it might be the setup and that I don't have the correct versions installed, but figured I would have gotten an error by now if so.
Also, regarding the FFT plot displayed by the "show_frequencies" function --- what is its scale related to? I'm using the breathing baby video, and my FFT spikes at 9. Is that 9/30 Hz? I notice you're using a sampling time related to the fps (frames per second?), which is printed in the Python shell when running "show_frequencies".
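If the plot's x-axis were raw FFT bin indices rather than hertz (a hypothetical reading of the "spike at 9"), the conversion would depend on both the fps and the number of frames, not just fps. Illustrative numbers of my own, not taken from any particular clip:

```python
import numpy as np

# Hypothetical clip: 300 frames at 30 fps.
# Bin index k maps to k * fps / n_frames hertz.
fps, n_frames = 30.0, 300
freqs = np.fft.rfftfreq(n_frames, d=1.0 / fps)

print(freqs[1])  # ~0.1 -> the bin spacing in Hz for this clip
print(freqs[9])  # ~0.9 -> a "spike at bin 9" would be about 0.9 Hz here
```

So 9/30 would only be right if the clip happened to be exactly one second long; in general the divisor is the clip duration in frames, not the frame rate.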
Any help would be appreciated. Thank you very much. Regards.