
HaishinKit for iOS, macOS, tvOS, visionOS and Android.

🌏 Related projects

| Project name | Notes | License |
| :--- | :--- | :--- |
| HaishinKit for Android | Camera and Microphone streaming library via RTMP for Android. | BSD 3-Clause "New" or "Revised" License |
| HaishinKit for Flutter | Camera and Microphone streaming library via RTMP for Flutter. | BSD 3-Clause "New" or "Revised" License |

🎨 Features

RTMP

SRT (beta)

Offscreen Rendering

Off-screen rendering makes it possible to overlay any text or bitmap on a video during broadcasting or viewing. This enables applications such as watermarking and time display.

Example

```swift
stream.videoMixerSettings.mode = .offscreen
stream.screen.startRunning()

textScreenObject.horizontalAlignment = .right
textScreenObject.verticalAlignment = .bottom
textScreenObject.layoutMargin = .init(top: 0, left: 0, bottom: 16, right: 16)
stream.screen.backgroundColor = UIColor.black.cgColor

let videoScreenObject = VideoTrackScreenObject()
videoScreenObject.cornerRadius = 32.0
videoScreenObject.track = 1
videoScreenObject.horizontalAlignment = .right
videoScreenObject.layoutMargin = .init(top: 16, left: 0, bottom: 0, right: 16)
videoScreenObject.size = .init(width: 160 * 2, height: 90 * 2)
_ = videoScreenObject.registerVideoEffect(MonochromeEffect())

let imageScreenObject = ImageScreenObject()
let imageURL = URL(fileURLWithPath: Bundle.main.path(forResource: "game_jikkyou", ofType: "png") ?? "")
if let provider = CGDataProvider(url: imageURL as CFURL) {
    imageScreenObject.verticalAlignment = .bottom
    imageScreenObject.layoutMargin = .init(top: 0, left: 0, bottom: 16, right: 0)
    imageScreenObject.cgImage = CGImage(
        pngDataProviderSource: provider,
        decode: nil,
        shouldInterpolate: false,
        intent: .defaultIntent
    )
} else {
    logger.info("no image")
}

let assetScreenObject = AssetScreenObject()
assetScreenObject.size = .init(width: 180, height: 180)
assetScreenObject.layoutMargin = .init(top: 16, left: 16, bottom: 0, right: 0)
try? assetScreenObject.startReading(AVAsset(url: URL(fileURLWithPath: Bundle.main.path(forResource: "SampleVideo_360x240_5mb", ofType: "mp4") ?? "")))

try? stream.screen.addChild(assetScreenObject)
try? stream.screen.addChild(videoScreenObject)
try? stream.screen.addChild(imageScreenObject)
try? stream.screen.addChild(textScreenObject)
stream.screen.delegate = self
```

Rendering

| Features | PiPHKView | MTHKView |
| :--- | :--- | :--- |
| Engine | AVSampleBufferDisplayLayer | Metal |
| Publish | ✔ | ✔ |
| Playback | ✔ | ✔ |
| VisualEffect | ✔ | ✔ |
| MultiCamera | ✔ | ✔ |
| PictureInPicture | ✔ | |
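Both views are attached to a stream the same way, so switching rendering engines is a small change. Below is a minimal sketch for PiPHKView, assuming it mirrors the MTHKView attach API shown in the usage sections later in this README; `stream` is the RTMPStream or SRTStream from those sections.

```swift
// A minimal sketch; API parity with MTHKView is assumed here.
let pipView = PiPHKView(frame: view.bounds)
pipView.videoGravity = AVLayerVideoGravity.resizeAspectFill
pipView.attachStream(stream)
view.addSubview(pipView)
```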


๐Ÿพ Examples

Examples project are available for iOS with UIKit, iOS with SwiftUI, macOS and tvOS. Example macOS requires Apple Silicon mac.

🌏 Requirements

Development

| Version | Xcode | Swift |
| :--- | :--- | :--- |
| 1.9.0+ | 15.4+ | 5.10+ |
| 1.8.0+ | 15.3+ | 5.9+ |
| 1.7.0+ | 15.0+ | 5.9+ |
| 1.6.0+ | 15.0+ | 5.8+ |

OS

| | iOS | tvOS | macOS | visionOS | watchOS |
| :--- | :--- | :--- | :--- | :--- | :--- |
| HaishinKit | 13.0+ | 13.0+ | 10.15+ | 1.0+ | - |
| SRTHaishinKit | 13.0+ | 13.0+ | 13.0+ | 1.0+ | - |

Cocoa Keys

Please make sure your Info.plist contains the usage descriptions required for capture (NSMicrophoneUsageDescription and NSCameraUsageDescription).

- iOS 10.0+
- macOS 10.14+
- tvOS 17.0+

🔧 Installation

HaishinKit has a multi-module configuration. If you want to use the SRT protocol, please use SRTHaishinKit.

SPM

Both HaishinKit and SRTHaishinKit are distributed via https://github.com/shogo4405/HaishinKit.swift (a minimal Package.swift sketch appears at the end of this section).

CocoaPods

For HaishinKit:

```ruby
source 'https://github.com/CocoaPods/Specs.git'
use_frameworks!

def import_pods
  pod 'HaishinKit', '~> 1.9.1'
end

target 'Your Target' do
  platform :ios, '13.0'
  import_pods
end
```

For SRTHaishinKit:

```ruby
source 'https://github.com/CocoaPods/Specs.git'
use_frameworks!

def import_pods
  pod 'SRTHaishinKit', '~> 1.9.1'
end

target 'Your Target' do
  platform :ios, '13.0'
  import_pods
end
```

Carthage

Carthage supports HaishinKit only; SRTHaishinKit is not available.

```
github "shogo4405/HaishinKit.swift" ~> 1.9.1
```
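For SPM, a minimal Package.swift could look like the following sketch. The package name "YourApp" and target layout are illustrative assumptions; the URL and version come from the table above.

```swift
// swift-tools-version:5.10
// A minimal sketch, not a canonical manifest: adjust names and versions to your project.
import PackageDescription

let package = Package(
    name: "YourApp", // hypothetical package name
    dependencies: [
        .package(url: "https://github.com/shogo4405/HaishinKit.swift", from: "1.9.1")
    ],
    targets: [
        .target(
            name: "YourApp",
            dependencies: [
                .product(name: "HaishinKit", package: "HaishinKit.swift")
                // To use the SRT protocol, also add:
                // , .product(name: "SRTHaishinKit", package: "HaishinKit.swift")
            ]
        )
    ]
)
```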

🔧 Prerequisites

Make sure you set up and activate your AVAudioSession on iOS.

```swift
import AVFoundation

let session = AVAudioSession.sharedInstance()
do {
    try session.setCategory(.playAndRecord, mode: .default, options: [.defaultToSpeaker, .allowBluetooth])
    try session.setActive(true)
} catch {
    print(error)
}
```

📓 RTMP Usage

Ingest

```swift
let connection = RTMPConnection()
let stream = RTMPStream(connection: connection)

stream.attachAudio(AVCaptureDevice.default(for: .audio)) { _, error in
  if let error {
    logger.warn(error)
  }
}

stream.attachCamera(AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .back), track: 0) { _, error in
  if let error {
    logger.warn(error)
  }
}

let hkView = MTHKView(frame: view.bounds)
hkView.videoGravity = AVLayerVideoGravity.resizeAspectFill
hkView.attachStream(stream)

// add ViewController#view
view.addSubview(hkView)

connection.connect("rtmp://localhost/appName/instanceName")
stream.publish("streamName")
```

Playback

```swift
let connection = RTMPConnection()
let stream = RTMPStream(connection: connection)

let hkView = MTHKView(frame: view.bounds)
hkView.videoGravity = AVLayerVideoGravity.resizeAspectFill
hkView.attachStream(stream)

// add ViewController#view
view.addSubview(hkView)

connection.connect("rtmp://localhost/appName/instanceName")
stream.play("streamName")
```

Authentication

```swift
let connection = RTMPConnection()
connection.connect("rtmp://username:password@localhost/appName/instanceName")
```

📓 SRT Usage

Ingest

```swift
let connection = SRTConnection()
let stream = SRTStream(connection: connection)

stream.attachAudio(AVCaptureDevice.default(for: .audio)) { _, error in
  if let error {
    logger.warn(error)
  }
}

stream.attachCamera(AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .back), track: 0) { _, error in
  if let error {
    logger.warn(error)
  }
}

let hkView = MTHKView(frame: view.bounds)
hkView.videoGravity = AVLayerVideoGravity.resizeAspectFill
hkView.attachStream(stream)

// add ViewController#view
view.addSubview(hkView)

connection.connect("srt://host:port?option=foo")
stream.publish()
```

Playback

```swift
let connection = SRTConnection()
let stream = SRTStream(connection: connection)

let hkView = MTHKView(frame: view.bounds)
hkView.videoGravity = AVLayerVideoGravity.resizeAspectFill
hkView.attachStream(stream)

// add ViewController#view
view.addSubview(hkView)

connection.connect("srt://host:port?option=foo")
stream.play()
```

📓 Settings

📹 AVCaptureSession

```swift
stream.frameRate = 30
stream.sessionPreset = AVCaptureSession.Preset.medium

// Do not call beginConfiguration() and commitConfiguration() within the scope
// of this method, as they are already called internally.
stream.configuration { session in
  session.automaticallyConfiguresApplicationAudioSession = true
}
```

🔊 Audio

Capture

Specifies the audio capture settings.

```swift
let device = AVCaptureDevice.default(for: .audio)
stream.attachAudio(device, track: 0) { audioUnit, error in
  // Configure the audio unit here if needed.
}
```

AudioMixerSettings

If you want to mix multiple audio tracks, please enable the feature flag.

```swift
stream.isMultiTrackAudioMixingEnabled = true
```
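With the flag enabled, each input can be attached to its own track via the `track` parameter shown in the Capture section above. A minimal sketch; `externalMic` is a hypothetical second AVCaptureDevice (e.g. discovered via AVCaptureDevice.DiscoverySession):

```swift
// Track 0: the built-in microphone.
stream.attachAudio(AVCaptureDevice.default(for: .audio), track: 0) { _, error in
  if let error {
    logger.warn(error)
  }
}
// Track 1: `externalMic` is a hypothetical second AVCaptureDevice.
stream.attachAudio(externalMic, track: 1) { _, error in
  if let error {
    logger.warn(error)
  }
}
```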

When you specify the sampling rate, it will perform resampling. Additionally, in the case of multiple channels, downmixing can be applied.

```swift
// Setting the value to 0 will be the same as the value specified in mainTrack.
stream.audioMixerSettings = IOAudioMixerSettings(
  sampleRate: 44100, // Float64
  channels: 0        // UInt32
)

stream.audioMixerSettings.isMuted = false
stream.audioMixerSettings.mainTrack = 0
stream.audioMixerSettings.tracks = [
  0: .init(
    isMuted: false,  // Bool
    downmix: true,   // Bool
    channelMap: nil  // [Int]?
  )
]
```

AudioCodecSettings

```swift
/// Specifies the bitRate of audio output.
stream.audioSettings.bitrate = 64 * 1000
/// Specifies whether to mix down the channels. Currently, it supports input sources with 4, 5, 6, and 8 channels.
stream.audioSettings.downmix = true
/// Specifies the map of the output to input channels ([Int]?).
stream.audioSettings.channelMap = nil
```
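As an illustration of the mapping direction described in the comment above (an assumption based on that comment, not verified against the implementation), each index is an output channel and each value is the input channel it reads from:

```swift
// Hypothetical example: swap the left and right stereo channels.
// Output 0 reads input 1; output 1 reads input 0.
stream.audioSettings.channelMap = [1, 0]
```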

🎥 Video

Capture

Specifies the video capture settings.

```swift
let front = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .front)
stream.attachCamera(front, track: 0) { videoUnit, error in
  videoUnit?.isVideoMirrored = true
  videoUnit?.preferredVideoStabilizationMode = .standard
  videoUnit?.colorFormat = kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange
}
```

VideoMixerSettings

```swift
/// Specifies the image rendering mode: .passthrough or .offscreen.
stream.videoMixerSettings.mode = .passthrough
/// Specifies whether the video signal of muted tracks is frozen.
stream.videoMixerSettings.isMuted = false
/// Specifies the main track number.
stream.videoMixerSettings.mainTrack = 0
```

VideoCodecSettings

```swift
stream.videoSettings = .init(
  videoSize: .init(width: 854, height: 480),
  profileLevel: kVTProfileLevel_H264_Baseline_3_1 as String,
  bitRate: 640 * 1000,
  maxKeyFrameIntervalDuration: 2,
  scalingMode: .trim,
  bitRateMode: .average,
  allowFrameReordering: nil,
  isHardwareEncoderEnabled: true
)
```
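Since `videoSettings` is a mutable property, individual fields can also be adjusted after the stream starts, for example to step the bitrate down on a congested network. A minimal sketch, assuming the settings remain settable mid-stream:

```swift
// Hypothetical adaptive step-down: halve the video bitrate on congestion.
stream.videoSettings.bitRate = 320 * 1000
```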

โบ๏ธ Recording

Internally, audio data with more than three channels is now handled. If you encounter audio issues with IOStreamRecorder, it is recommended to set the mixer back to a maximum of two channels when saving locally.

```swift
// Cap the mixer at two channels for local recording.
// A sampleRate of 0 means the same value as the input.
let channels = min(stream.audioInputFormats[0].channels ?? 1, 2)
stream.audioMixerSettings = .init(sampleRate: 0, channels: channels)

var recorder = IOStreamRecorder()
stream.addObserver(recorder)

recorder.settings = [
  AVMediaType.audio: [
    AVFormatIDKey: Int(kAudioFormatMPEG4AAC),
    AVSampleRateKey: 0, // 0 means the same as the input.
    AVNumberOfChannelsKey: 0,
    // AVEncoderBitRateKey: 128000,
  ],
  AVMediaType.video: [
    AVVideoCodecKey: AVVideoCodecH264,
    AVVideoHeightKey: 0, // 0 means the same as the input.
    AVVideoWidthKey: 0,
    /*
    AVVideoCompressionPropertiesKey: [
      AVVideoMaxKeyFrameIntervalDurationKey: 2,
      AVVideoProfileLevelKey: AVVideoProfileLevelH264Baseline30,
      AVVideoAverageBitRateKey: 512000
    ]
    */
  ]
]

recorder.startRunning()
// recorder.stopRunning()
```

📜 License

BSD-3-Clause