
React-Native-Transcoder

This native library provides video composition capabilities for Android and iOS. With it you can combine multiple video assets into sequential segments, seek within each asset, and apply filters such as fade-in and fade-out while transcoding everything into a single output file.

On iOS the library uses the AVFoundation classes, specifically AVVideoComposition, to build the final composition. On Android, selsamman/android-transcoder is used, which transcodes with the native MediaCodec capabilities for hardware-accelerated transcoding without FFmpeg.

Android and iOS Installation

    yarn add react-native-transcode

If you are using a React Native version below 0.60, also run react-native link; newer versions link native modules automatically.

Additional Installation Needed for Android

The Android version uses the jitpack.io repository for its transcoding binary, so you need to add the following after the mavenLocal() line in android/build.gradle:

    maven {
        name "jitpack"
        url "https://jitpack.io"
    }

You also need to set minSdkVersion to 21 in android/build.gradle, since the Android transcoder requires newer APIs. This limits backwards compatibility to Android 5.0 (Lollipop) and above.
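Putting both changes together, the relevant parts of android/build.gradle might look like the sketch below (the SDK version numbers are illustrative; keep your project's existing values):

```groovy
buildscript {
    ext {
        // The Android transcoder requires API level 21 (Lollipop) or higher
        minSdkVersion = 21
        compileSdkVersion = 28 // illustrative; keep your project's value
        targetSdkVersion = 28  // illustrative; keep your project's value
    }
}

allprojects {
    repositories {
        mavenLocal()
        // jitpack hosts the transcoding binary used by the Android implementation
        maven {
            name "jitpack"
            url "https://jitpack.io"
        }
        google()
        jcenter()
    }
}
```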

Usage


    // Illustrative imports; prepFile and Hopscotch come from the surrounding app code
    import Transcode from 'react-native-transcode';
    import RNFetchBlob from 'rn-fetch-blob';

    const poolCleanerInputFile = await this.prepFile('poolcleaner.mp4');
    const outputFile = RNFetchBlob.fs.dirs.DocumentDir + '/output_' + Hopscotch.displayName + '.mp4';

    await Transcode.start()
        // Register the same file twice under two names so it can be cross-faded with itself
        .asset({name: "A", path: poolCleanerInputFile})
        .asset({name: "B", path: poolCleanerInputFile})

        // 500 ms of A alone
        .segment(500)
            .track({asset: "A"})

        // 500 ms cross-fade: A fades out while B (starting 750 ms in) fades in
        .segment(500)
            .track({asset: "A", filter: "FadeOut"})
            .track({asset: "B", filter: "FadeIn", seek: 750})

        // 500 ms of B alone
        .segment(500)
            .track({asset: "B"})

        // 500 ms cross-fade back from B to A
        .segment(500)
            .track({asset: "B", filter: "FadeOut"})
            .track({asset: "A", filter: "FadeIn", seek: 500})

        // Final 500 ms of A
        .segment(500)
            .track({asset: "A"})

        .process("low", outputFile, (progress) => progressCallback(progress));

API

The API uses function chaining to specify the video composition to be created. A chain consists of one or more assets, one or more segments (each with its tracks), and a final process call.

To initiate the composition you use Transcode.start(). You then chain on the assets and segments, finally chaining the process call to start the transcoding; the chain is awaited and resolves when the transcode is complete and the composition has been created.

await Transcode.start()

asset

Any video assets used in the transcoding must first be added using the asset function.

.asset({name: String, path: String, type: String})

segment

Multiple segments may be attached to the timeline to define the individual sequential portions of the composition. You create a segment by chaining a call to segment.

    .segment(Number)

This creates a time segment with a specific duration specified in milliseconds. If duration is omitted the entirety of the remaining stream is processed. track calls are chained to the segment call to define the individual tracks to be decoded during this particular segment.

    .track({asset: String, filter: String, seek: Number, duration: Number})
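As a sanity check on timeline math, here is a small pure-JavaScript helper (hypothetical, not part of the library) that sums explicit segment durations to estimate the output length; the five 500 ms segments in the usage example above would yield 2500 ms:

```javascript
// Hypothetical helper: estimate output duration from segment durations (ms).
// A segment with no duration consumes the rest of its stream, so the
// estimate is only defined when every segment has an explicit duration.
function estimateOutputDuration(segmentDurations) {
  if (segmentDurations.some((d) => d == null)) {
    throw new Error('Cannot estimate: a segment has no explicit duration');
  }
  return segmentDurations.reduce((total, d) => total + d, 0);
}

// The usage example above chains five 500 ms segments:
console.log(estimateOutputDuration([500, 500, 500, 500, 500])); // 2500
```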

process

The last function to be chained is process, which defines the output resolution, the output file, and a progress callback.

    .process(String, String, Function)

The first argument is the output resolution ("low" in the usage example above), the second is the path of the output file, and the third is a callback invoked with transcoding progress.
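If you want to surface progress to users, a small hypothetical wrapper can format the callback value; this sketch assumes progress is reported as a fraction between 0 and 1 (verify against your library version) and clamps defensively:

```javascript
// Hypothetical wrapper: format a 0..1 progress value as a whole percentage.
// Assumes the library reports progress as a fraction; clamps out-of-range values.
function toPercent(progress) {
  const clamped = Math.min(1, Math.max(0, progress));
  return Math.round(clamped * 100) + '%';
}

// e.g. .process("low", outputFile, (p) => console.log(toPercent(p)));
console.log(toPercent(0.5)); // 50%
```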

Copyright (C) 2016-2019 Sam Elsamman

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.