Hi Martin,
Thanks for your enthusiasm.
It has been quite a while since I last worked on the code, and there are some clean-ups and a few other things I'd like to add. Maybe your requests could form the objectives of a version 2.5.
1) Can you specify exactly what you want to do / mean by opening an image sequence?
Is it because you want to encode the images into a movie? Or is it just for using them as a movie, being able to "play" and "seek" through the image sequence...
I will have a look at the way this kind of thing is implemented in the ffmpeg source.
2) I believe the alpha channel should not be difficult to retrieve with the current implementation, and I definitely have to write a man page with more examples for pyffmpeg.
Basically, by specifying a track selector you can select the pixel_format you want to be decoded for you (you may also specify the size of the image, for instance if you are only interested in thumbnails, by setting 'dest_width' and 'dest_height'):
import sys
from pyffmpeg import *  # FFMpegReader, PixelFormats (assuming the usual wildcard import)

## select the first video track and ask for RGB32 frames
TS_VIDEO_RGB32 = {'video1': (0, -1, {'pixel_format': PixelFormats.RGB32})}

## create the reader object
mp = FFMpegReader()

## open a video file passed on the command line
vf = sys.argv[1]
mp.open(vf, TS_VIDEO_RGB32)
tracks = mp.get_tracks()
vt = tracks[0]

## seek to a few frames and fetch the decoded image
for x in range(10, 20):
    vt.seek_to_frame(x)
    image = vt.get_current_frame()[2]
    # ...
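If you only need thumbnails, the selector could look roughly like this (an untested sketch; the 160x90 size is just an example):

TS_VIDEO_THUMB = {'video1': (0, -1, {'pixel_format': PixelFormats.RGB32,
                                     'dest_width': 160,
                                     'dest_height': 90})}
mp.open(vf, TS_VIDEO_THUMB)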
However, I have never played a movie with an alpha channel, so I haven't checked that this feature is free of bugs.
3) Regarding donations: if you are interested in some improvements, I have a PayPal account, but we should discuss more directly which improvements you want. I do believe pyffmpeg still needs to be improved, so being supported for this, and knowing there is a clear interest in certain features, will help me dedicate time to these issues.
If you still have needs, and if I think it is reasonable for me to start working on version 2.5 now, we may find some agreement.
Thanks for your post,
Bertrand Nouvel
Original comment by bertrand...@gmail.com
on 6 Oct 2010 at 1:51
Hello,
my goal is to write a simple composition prototyping application, which takes multiple video sources (with or without an alpha channel) and allows them to be composed using OpenGL in a 3D scene, with simple colour corrections and so on. More or less like PD Player ( http://pdplayer.com/ ), which has a rather strict licensing policy. As I don't need realtime playback of dozens of uncompressed HD streams, I think using pyffmpeg would be a good idea.
Image sequences can be used with the ffmpeg CLI: for example, if you have 100 tga images named frame001.tga, frame002.tga ... frame100.tga, you can use ffmpeg -i frame%03d.tga (where %03d means a 3-digit number) and ffmpeg will read the sequence as a movie. Of course I can write another reader for image files, but if I could use just one reader for both movies and sequences, it would be much easier (and I believe the ffmpeg reader would be much faster than, for example, PIL).
The same version of ffmpeg I've used for the pyffmpeg build (downloaded yesterday from ffmpeg.org) can open image sequences as well as QuickTime files with alpha:
Input #0, image2, from 'e:\render\amr\jingle02\test%04d.tga':
Duration: 00:00:04.52, start: 0.000000, bitrate: N/A
Stream #0.0: Video: targa, bgra, 1024x576, 25 tbr, 25 tbn, 25 tbc
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'e:\render\amr\test_alpha.mov':
Duration: 00:00:04.70, start: 0.000000, bitrate: 293061 kb/s
Stream #0.0(eng): Video: targa, bgra, 1024x576, 24 tbr, 24 tbn, 24 tbc
but if I try to open media with alpha using FFMpegReader.open, I get:
File "E:\Programming\2\graphslave.py", line 193, in __init__
self.mp.open("../jingle02/test%4d.tga")
File "pyffmpeg.pyx", line 1917, in pyffmpeg.FFMpegReader.open (pyffmpeg.c:11305)
NameError: AVFMT_NOFILE
or
File "E:\Programming\2\graphslave.py", line 194, in __init__
self.mp.open("E:\\render\\amr\\test_alpha.mov")
File "pyffmpeg.pyx", line 1947, in pyffmpeg.FFMpegReader.open (pyffmpeg.c:11691)
File "pyffmpeg.pyx", line 2036, in pyffmpeg.FFMpegReader.__finalize_open
(pyffmpeg.c:12379)
IOError: Unable to find specified Track
so I guess pyffmpeg has no bindings for this (bgra) pixel format or for image sequences?
If I try the suggested:
TS_VIDEO_RGB32 = {'video1': (0, -1, {'pixel_format': PixelFormats.RGB32})}
mp.open(vf, TS_VIDEO_RGB32)
with the QT movie, the application crashes during mp.open (on Windows it shows the typical "Application encountered an error..." message with no details; no exception is thrown by Python). I haven't tested it on Linux yet.
You can download my test files here:
http://martas.imm.cz/private/test_alpha.mov
http://martas.imm.cz/private/sequence.zip
I'd love to participate in pyffmpeg as more than just an advisor, but so far I have no experience with Cython and this kind of bindings. Recently I tried to write my own wrapper around libavcodec and libavformat using ctypes, but without luck.
Regards
Martin
Original comment by martin...@gmail.com
on 6 Oct 2010 at 1:17
Thanks a lot Martin.
I will have a look at your files soon, and I will see what can be done to make them work.
Original comment by bertrand...@gmail.com
on 7 Oct 2010 at 12:01
Thanks Bertrand,
I'm looking forward to your results.
Original comment by martin...@gmail.com
on 8 Oct 2010 at 11:50
Hi Martin,
Sorry for the late fix..
The new PyFFMPEG 2.1 beta now supports alpha and image sequences (at least on the video I tried). The previous version did not support alpha because some constant had changed...
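For instance, something like this should now open your tga sequence with its alpha channel (only lightly tested on my side; the same printf-style pattern as in the ffmpeg CLI is used for the path):

TS_VIDEO_BGRA = {'video1': (0, -1, {'pixel_format': PixelFormats.BGRA})}
mp = FFMpegReader()
mp.open("test%04d.tga", TS_VIDEO_BGRA)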
Thanks for pointing out these shortcomings of the previous release.
Bertrand
Original comment by bertrand...@gmail.com
on 12 Jan 2011 at 4:00
Great news. I've tried both and it no longer crashes. Still, I am able to open media with alpha, but I cannot see the alpha channel itself; I guess it is an ffmpeg issue.
Thanks
Original comment by martin...@gmail.com
on 14 Jan 2011 at 4:14
Hi Martin,
I may have forgotten to include the sample using the alpha channel.
Basically, you may need to do something like this:
import sys
from pyffmpeg import *  # assuming FFMpegReader, PixelFormats and LazyDisplayQt are all exposed here

## ask for BGRA frames so the alpha channel is decoded too
TS_VIDEO_BGRA = {'video1': (0, -1, {'pixel_format': PixelFormats.BGRA})}
mp = FFMpegReader(0, False)
vf = sys.argv[1]
mp.open(vf, TS_VIDEO_BGRA)
tracks = mp.get_tracks()
ld = LazyDisplayQt()
tracks[0].set_observer(ld.f)   # display each decoded frame
mp.run()
If you are using numpy, the fourth channel of the image array will be the alpha channel itself.
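For example, instead of LazyDisplayQt you could plug in your own observer and read the alpha plane directly (a minimal sketch, assuming the observer receives each decoded frame as an HxWx4 numpy array in BGRA order):

def on_frame(frame):
    # with BGRA decoding, channel 3 is the alpha plane
    alpha = frame[:, :, 3]
    # ... composite, upload as an OpenGL texture, etc.

tracks[0].set_observer(on_frame)
mp.run()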
If you are using the Qt display, by setting a background image you can check that the frames are drawn with some transparency over the previous image.
Bertrand
Original comment by bertrand...@gmail.com
on 15 Jan 2011 at 1:23
Original issue reported on code.google.com by
martin...@gmail.com
on 5 Oct 2010 at 6:16