OpenNewsLabs / autoEdit_2

Fast text-based video editing. Node/Electron OS X desktop app, with a Backbone front end.
https://opennewslabs.github.io/autoEdit_2/
MIT License

Tweet a clip selection - new feature #74

Open pietrop opened 6 years ago

pietrop commented 6 years ago

At textAV 2018 I got a chance to work on "Full Fact - tweet that clip" and, with @jamesdools, to refactor and abstract the Twitter module from quickQuoteNode (a Node refactor of the original Rails app quickQuote) into a standalone module, tweet-that-clip.

I've then played around with an autoEdit2 branch, tweet-that-clip, to see whether it was possible to integrate the functionality with autoEdit.

It needs a bit of cleaning up and tidying, but I got the basics to work: you can make a selection and tweet a clip, see the screenshot.

(screenshot: 2018-09-25 at 23:22:31)


There is possibly room for improvement.

pietrop commented 6 years ago

Figured out how to convert audio to video: it seems you need to add an image and then loop it over the duration of the audio.

e.g.

```
ffmpeg -loop 1 -i 20180919_053412_0.jpg -i test-audio.m4a -shortest short-outputfile.mp4
```

and using fluent-ffmpeg:

```javascript
/**
 * Converting this ffmpeg command into a fluent-ffmpeg one to make a module out of it
 * ffmpeg -loop 1 -i 20180919_053412_0.jpg -i test-audio.m4a  -shortest  short-outputfile.mp4
 */
const ffmpeg = require('fluent-ffmpeg');

const audioSrc = 'test-audio.m4a';
const imageSrc = '20180919_053412_0.jpg';
const outputFileName = 'short-outputfile-fluent.mp4';
// multiple inputs https://github.com/fluent-ffmpeg/node-fluent-ffmpeg#mergetofilefilename-tmpdir-concatenate-multiple-inputs
ffmpeg(audioSrc)
  .input(imageSrc)
  .loop()
  .output(outputFileName)
  .withVideoCodec('libx264')
  // -shortest is used to avoid looping the image indefinitely once the audio ends
  .addOptions(['-shortest'])
  // .withVideoBitrate(1024)
  .withAudioCodec('aac')
  // when done executing, log / hand the output file name to a callback
  .on('end', function() {
    console.log(`Done processing ${outputFileName}`);
    // callback(outputFileName);
  })
  .run();
```
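To make a module out of this, one option (just a sketch; the function name and callback shape are placeholders, not anything already in autoEdit) could be to wrap the fluent-ffmpeg call in a promise:

```javascript
const ffmpeg = require('fluent-ffmpeg');

// Hypothetical helper: converts an audio file + still image into an mp4.
// Resolves with the output file name once ffmpeg finishes, rejects on error.
function audioToVideo(audioSrc, imageSrc, outputFileName) {
  return new Promise((resolve, reject) => {
    ffmpeg(audioSrc)
      .input(imageSrc)
      .loop()
      .withVideoCodec('libx264')
      .withAudioCodec('aac')
      // stop when the shortest input (the audio) ends
      .addOptions(['-shortest'])
      .output(outputFileName)
      .on('end', () => resolve(outputFileName))
      .on('error', reject)
      .run();
  });
}

// example usage
// audioToVideo('test-audio.m4a', '20180919_053412_0.jpg', 'short-outputfile.mp4')
//   .then(file => console.log(`Done processing ${file}`));
```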

More details here: https://trello.com/c/AjGPSIvf

Need to decide whether to just use a default black image, or to allow supplying an image, and/or add captions to the file, or a waveform. I don't really want to "re-invent" audiogram... so something simple and modular would be good. Suggestions welcome.
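If we did go with a default black image, one way to generate it on the fly (a sketch using ffmpeg's lavfi color source; the size and file name are just placeholders) might be:

```
ffmpeg -f lavfi -i color=c=black:s=1920x1080 -frames:v 1 default-black.png
```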

pietrop commented 6 years ago

Alternatively, if going down the route of generating an audio waveform video from the audio file, then there's no need to loop an image (could just add captions).

This command seems to work:

```
ffmpeg -i test-audio.m4a -filter_complex "[0:a]showwaves=s=1920x1080:mode=line,format=yuv420p[v]" -map "[v]" -map 0:a -c:v libx264 -c:a copy output-wave.mp4
```

See a video example of the effect: https://youtu.be/jrY9h9OFBCM
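For reference, the same showwaves command could probably also be expressed with fluent-ffmpeg's complexFilter (a sketch, untested; file names are placeholders):

```javascript
const ffmpeg = require('fluent-ffmpeg');

// Hypothetical sketch: render an animated waveform video from an audio file,
// mirroring the ffmpeg command above.
ffmpeg('test-audio.m4a')
  .complexFilter('[0:a]showwaves=s=1920x1080:mode=line,format=yuv420p[v]')
  // map the generated waveform video plus the original audio into the output
  .outputOptions(['-map', '[v]', '-map', '0:a'])
  .withVideoCodec('libx264')
  .withAudioCodec('copy')
  .output('output-wave.mp4')
  .on('end', () => console.log('Done generating waveform video'))
  .run();
```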

pietrop commented 6 years ago

Next steps: considering adding support for audio.

pietrop commented 6 years ago

Made some improvements to the tweet-that-clip npm module to enable the functionality discussed above: https://github.com/pietrop/tweet-that-clip/pull/1.

Got support for burnt-in captions and audio (with an animated waveform) working.

However, I'm going to try and do some refactoring/cleaning up before merging to master.

It will then be possible to update the module in autoEdit and try to integrate it (e.g. generating an SRT from the clip selection, burning in the captions, etc.).
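For the caption side, one possible approach (a sketch, assuming an SRT has already been generated from the clip selection; file names are placeholders and this isn't necessarily how tweet-that-clip does it) would be ffmpeg's subtitles filter via fluent-ffmpeg, which needs an ffmpeg build with libass:

```javascript
const ffmpeg = require('fluent-ffmpeg');

// Hypothetical sketch: burn an SRT generated from the clip selection
// into the clip, so the captions become part of the video frames.
ffmpeg('clip.mp4')
  .videoFilters('subtitles=clip.srt')
  .withVideoCodec('libx264')
  .withAudioCodec('copy')
  .output('clip-with-captions.mp4')
  .on('end', () => console.log('Done burning in captions'))
  .run();
```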

Suggestions and help welcome :)

pietrop commented 6 years ago

Updated to TweetThatClip 2.0.1 and started the integration.