A high-level video composition framework built on top of AVFoundation. It's simple to use and easy to extend. Use it to make your life easier if you are implementing a video composition feature.
This project is built around a Timeline concept. Any resource can be put into a Timeline. A resource can be an image, video, audio, GIF, and so on.
Below is the simplest example. Create a resource from an AVAsset, set the video frame's scale mode to aspect fill, insert the trackItem into the timeline, and finally use a CompositionGenerator to build an AVAssetExportSession, AVAssetImageGenerator, or AVPlayerItem.
// 1. Create a resource
let asset: AVAsset = ...
let resource = AVAssetTrackResource(asset: asset)
// 2. Create a TrackItem instance; a TrackItem holds the video and audio configuration
let trackItem = TrackItem(resource: resource)
// Set the video scale mode on canvas
trackItem.configuration.videoConfiguration.baseContentMode = .aspectFill
// 3. Add TrackItem to timeline
let timeline = Timeline()
timeline.videoChannel = [trackItem]
timeline.audioChannel = [trackItem]
// 4. Use CompositionGenerator to create AVAssetExportSession/AVAssetImageGenerator/AVPlayerItem
let compositionGenerator = CompositionGenerator(timeline: timeline)
// Set the video canvas's size
compositionGenerator.renderSize = CGSize(width: 1920, height: 1080)
let exportSession = compositionGenerator.buildExportSession(presetName: AVAssetExportPresetMediumQuality)
let playerItem = compositionGenerator.buildPlayerItem()
let imageGenerator = compositionGenerator.buildImageGenerator()
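From here the export runs like any other AVAssetExportSession. A minimal sketch, assuming exportSession has been unwrapped into a non-optional session, and that the generator has not already configured an output (the URL below is a placeholder):

// Standard AVAssetExportSession usage; the output location is a placeholder
session.outputURL = URL(fileURLWithPath: NSTemporaryDirectory())
    .appendingPathComponent("cabbage-output.mp4")
session.outputFileType = .mp4
session.exportAsynchronously {
    if session.status == .completed {
        print("Export finished")
    } else {
        print("Export failed: \(String(describing: session.error))")
    }
}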
Timeline
The Timeline organizes resources; the developer is responsible for placing each resource at the right time range, as sketched below.
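For instance, placing two clips back to back means giving each track item an explicit range. A minimal sketch, assuming TrackItem exposes a settable CMTimeRange (written here as timeRange; the exact property name may differ between versions):

// Hypothetical property name; resourceA/resourceB are any Resource instances
let itemA = TrackItem(resource: resourceA)
itemA.timeRange = CMTimeRange(start: .zero,
                              duration: CMTime(seconds: 3, preferredTimescale: 600))
let itemB = TrackItem(resource: resourceB)
itemB.timeRange = CMTimeRange(start: itemA.timeRange.end,
                              duration: CMTime(seconds: 2, preferredTimescale: 600))
timeline.videoChannel = [itemA, itemB]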
CompositionGenerator
Use CompositionGenerator to create an AVAssetExportSession, AVAssetImageGenerator, or AVPlayerItem. It translates a Timeline instance into the underlying AVFoundation API.
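Previewing the result is ordinary AVFoundation from here on, for example by wrapping the built player item in an AVPlayer:

// Standard AVFoundation playback of the generated player item
let player = AVPlayer(playerItem: playerItem)
let playerLayer = AVPlayerLayer(player: player)
playerLayer.frame = hostView.bounds   // hostView: whichever view shows the preview
hostView.layer.addSublayer(playerLayer)
player.play()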
Resource
A Resource provides image and/or audio data, along with timing information about that data.
Currently supported resource types:

- ImageResource: provides a CIImage as the video frame
- PHAssetImageResource: provides a PHAsset and loads a CIImage from it as the video frame
- AVAssetReaderImageResource: provides an AVAsset and reads sample buffers as video frames using AVAssetReader
- AVAssetReverseImageResource: provides an AVAsset and reads sample buffers as video frames using AVAssetReader, but in reverse order
- AVAssetTrackResource: provides an AVAsset and uses its AVAssetTrack for video and audio frames
- PHAssetTrackResource: provides a PHAsset and loads an AVAsset from it

TrackItem
A TrackItem contains a Resource, a VideoConfiguration, and an AudioConfiguration.
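Because every resource becomes a TrackItem, mixing media types is just a matter of putting different items in the same channel. A hypothetical sketch, assuming ImageResource can be initialized with a CIImage and a duration (the initializer shown is an assumption, so check the class):

// Hypothetical initializer: a 3-second still image followed by a video clip
let stillImage = CIImage(image: someUIImage)!   // someUIImage: any UIImage
let imageResource = ImageResource(image: stillImage,
                                  duration: CMTime(seconds: 3, preferredTimescale: 600))
let imageItem = TrackItem(resource: imageResource)
let videoItem = TrackItem(resource: AVAssetTrackResource(asset: asset))
timeline.videoChannel = [imageItem, videoItem]
timeline.audioChannel = [videoItem]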
You can provide a custom resource type by subclassing Resource and implementing func tracks(for type: AVMediaType) -> [AVAssetTrack]. By subclassing ImageResource, you can supply any CIImage as a video frame; see the sketch below.
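As an illustration, a subclass might synthesize frames on the fly. This is only a sketch: the image(at:renderSize:) override point is an assumption about ImageResource's API, so verify the actual method before relying on it.

import AVFoundation
import CoreImage

// Hypothetical sketch: returns a solid-color frame for every request.
// The overridden method name is assumed, not confirmed against Cabbage.
class SolidColorResource: ImageResource {
    override func image(at time: CMTime, renderSize: CGSize) -> CIImage? {
        return CIImage(color: CIColor(red: 0.2, green: 0.6, blue: 0.9))
            .cropped(to: CGRect(origin: .zero, size: renderSize))
    }
}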
An image filter needs to implement the VideoConfigurationProtocol protocol; it can then be added to TrackItem.configuration.videoConfiguration.configurations. KeyframeVideoConfiguration is a concrete implementation.
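As a rough illustration, a Core Image based filter could look like the sketch below. The requirement shown (applyEffect(to:info:)) is an assumption about what VideoConfigurationProtocol asks for; consult the protocol definition, which may require more.

import CoreImage

// Hypothetical sketch of a filter configuration; the protocol requirement
// shown here is assumed, not confirmed against Cabbage.
class SepiaConfiguration: VideoConfigurationProtocol {
    func applyEffect(to sourceImage: CIImage, info: VideoConfigurationEffectInfo) -> CIImage {
        let filter = CIFilter(name: "CISepiaTone")!
        filter.setValue(sourceImage, forKey: kCIInputImageKey)
        filter.setValue(0.8, forKey: kCIInputIntensityKey)
        return filter.outputImage ?? sourceImage
    }
}

trackItem.configuration.videoConfiguration.configurations.append(SepiaConfiguration())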
An audio mixer needs to implement the AudioConfigurationProtocol protocol; it can then be added to TrackItem.configuration.audioConfiguration.nodes. VolumeAudioConfiguration is a concrete implementation.
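Usage is analogous to the video side. A hypothetical sketch of fading audio in, assuming VolumeAudioConfiguration takes a time range plus start and end volumes (the parameter names are assumptions):

// Hypothetical initializer: fade the clip's audio in over the first second
let fadeIn = VolumeAudioConfiguration(timeRange: CMTimeRange(start: .zero,
                                                             duration: CMTime(seconds: 1, preferredTimescale: 600)),
                                      startVolume: 0.0,
                                      endVolume: 1.0)
trackItem.configuration.audioConfiguration.nodes.append(fadeIn)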
Why I created Cabbage

AVFoundation already provides powerful composition APIs for video and audio, but they are far from easy to use.
1. AVComposition
We need to know how and when to connect different tracks. Say we store time range info for every track: we soon realize that this info is very fragile. Operations such as inserting a clip, removing a clip, or changing a clip's speed all shift the timeline, and every track's time range info then needs to be updated.
Worse, AVComposition only supports video and audio tracks. If we want to combine photos with video, it is very difficult to implement.
2. AVVideoComposition
AVVideoCompositionInstruction constructs the timeline, and AVVideoCompositionLayerInstruction configures each track's transform. To operate on raw video frame data, we have to implement the AVVideoCompositing protocol.
After writing this code myself, I realized that much of it was unrelated to business logic and ought to be encapsulated.
3. Difficult to extend
AVFoundation only supports a few basic composition features; as far as I know, it can only change a video frame's transform and the audio volume. If a developer wants anything beyond that, e.g. applying a filter to a video frame, they have to implement AVVideoComposition's AVVideoCompositing protocol themselves, and the workload suddenly becomes very large.
Life is hard; why should writing code be hard too? So I created Cabbage: an easy-to-understand API with flexible, extensible features.
Installation

CocoaPods
platform :ios, '9.0'
use_frameworks!
target 'MyApp' do
  # your other pods
  # ...
  pod 'VFCabbage'
end
Manually
Manual installation is not recommended, but if you must, you can either:

- copy everything in the Cabbage/Sources folder into your project, or
- add the repository as a git submodule:

$ git submodule add https://github.com/VideoFlint/Cabbage.git
License

Cabbage is available under the MIT license.