I want to preface this by saying that I have never used OME before, so this could be a very dumb question, and I may have set it up wrong. I am using the default configuration. I will post the Server.xml file below, as well as the Docker run command I used to start OME.
I'm trying to stream AAC/H.264 samples from a Node.js app to OME. My goal is to get a Low-Latency HLS stream out of it. The best approach I could come up with was to build the MPEG-TS packets manually myself and then pipe that data into FFmpeg, which in turn sends it to OME's SRT port (the full ffmpeg command is at the end of this post).
I keep my OME demo server as a Docker container on a remote Ubuntu 20.04 machine that I access over my VPN. My program doesn't crash and it doesn't give me any errors, but I have no way of knowing whether OME is actually receiving the live stream.
I've tried navigating to https://{ip}:3334/app/test/llhls.m3u8, but I don't think I have my certs set up right (I use the jwilder/nginx-proxy Docker container together with nginxproxy/acme-companion), so I tried navigating to http://{ip}:3334/app/test/ts:playlist.m3u8 instead.
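If I enabled the commented-out <Managers>/<API> block in the config below, I assume I could at least list the active streams over OME's REST API with something like this (a sketch: port 8081, the example token ome-access-token, and its base64 form are all taken from the commented-out defaults in my Server.xml):
# Hypothetical check: requires the <API> bind and <Managers> sections to be uncommented.
# Lists streams in vhost "default", app "app"; an active ingest should show up here.
curl -H "Authorization: Basic b21lLWFjY2Vzcy10b2tlbg==" \
  http://{ip}:8081/v1/vhosts/default/apps/app/streams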
Here is my Server.xml file (same for both edge and origin conf):
<?xml version="1.0" encoding="UTF-8"?>
<Server version="8">
<Name>OvenMediaEngine</Name>
<!-- Host type (origin/edge) -->
<Type>origin</Type>
<!-- Specify IP address to bind (* means all IPs) -->
<IP>*</IP>
<PrivacyProtection>false</PrivacyProtection>
<!--
To get the public IP address (the mapped address from STUN) of the local server.
This is useful when OME cannot obtain a public IP from an interface, such as in AWS or Docker environments.
If this is successful, you can use ${PublicIP} in your settings.
-->
<StunServer>stun.l.google.com:19302</StunServer>
<Modules>
<!--
Currently OME only supports h2, as all browsers do. Therefore, HTTP/2 only works on TLS ports.
-->
<HTTP2>
<Enable>true</Enable>
</HTTP2>
<LLHLS>
<Enable>true</Enable>
</LLHLS>
<!-- P2P works only in WebRTC and is an experimental feature -->
<P2P>
<!-- disabled by default -->
<Enable>false</Enable>
<MaxClientPeersPerHostPeer>2</MaxClientPeersPerHostPeer>
</P2P>
</Modules>
<!-- Settings for the ports to bind -->
<Bind>
<!-- Enable this configuration if you want to use API Server -->
<!--
<Managers>
<API>
<Port>8081</Port>
<TLSPort>8082</TLSPort>
<WorkerCount>1</WorkerCount>
</API>
</Managers>
-->
<Providers>
<!-- Pull providers -->
<RTSPC>
<WorkerCount>1</WorkerCount>
</RTSPC>
<OVT>
<WorkerCount>1</WorkerCount>
</OVT>
<!-- Push providers -->
<RTMP>
<Port>1935</Port>
<WorkerCount>1</WorkerCount>
</RTMP>
<SRT>
<Port>9999</Port>
<WorkerCount>1</WorkerCount>
</SRT>
<MPEGTS>
<!--
Listen on port 4000~4005 (<Port>4000-4004,4005/udp</Port>)
This is just a demonstration to show that you can configure the port in several ways
-->
<Port>4000/udp</Port>
</MPEGTS>
<WebRTC>
<Signalling>
<Port>3333</Port>
<TLSPort>3334</TLSPort>
<WorkerCount>1</WorkerCount>
</Signalling>
<IceCandidates>
<IceCandidate>*:10000/udp</IceCandidate>
<!--
If you want to stream WebRTC over TCP, specify IP:Port for TURN server.
This uses the TURN protocol, which delivers the stream from the built-in TURN server to the player's TURN client over TCP.
For detailed information, refer https://airensoft.gitbook.io/ovenmediaengine/streaming/webrtc-publishing#webrtc-over-tcp
-->
<TcpRelay>*:3478</TcpRelay>
<!-- TcpForce is an option to force the use of TCP rather than UDP in WebRTC streaming. (You can omit ?transport=tcp accordingly.) If <TcpRelay> is not set, playback may fail. -->
<TcpForce>true</TcpForce>
<TcpRelayWorkerCount>1</TcpRelayWorkerCount>
</IceCandidates>
</WebRTC>
</Providers>
<Publishers>
<OVT>
<Port>9000</Port>
<WorkerCount>1</WorkerCount>
</OVT>
<LLHLS>
<!--
OME only supports h2, so LLHLS works over HTTP/1.1 on non-TLS ports.
LLHLS works with higher performance over HTTP/2,
so it is recommended to use a TLS port.
-->
<Port>3333</Port>
<!-- If you want to use TLS, specify the TLS port -->
<TLSPort>3334</TLSPort>
<WorkerCount>1</WorkerCount>
</LLHLS>
<WebRTC>
<Signalling>
<Port>3333</Port>
<TLSPort>3334</TLSPort>
<WorkerCount>1</WorkerCount>
</Signalling>
<IceCandidates>
<IceCandidate>*:10000-10005/udp</IceCandidate>
<!--
If you want to stream WebRTC over TCP, specify IP:Port for TURN server.
This uses the TURN protocol, which delivers the stream from the built-in TURN server to the player's TURN client over TCP.
For detailed information, refer https://airensoft.gitbook.io/ovenmediaengine/streaming/webrtc-publishing#webrtc-over-tcp
-->
<TcpRelay>*:3478</TcpRelay>
<!-- TcpForce is an option to force the use of TCP rather than UDP in WebRTC streaming. (You can omit ?transport=tcp accordingly.) If <TcpRelay> is not set, playback may fail. -->
<TcpForce>true</TcpForce>
<TcpRelayWorkerCount>1</TcpRelayWorkerCount>
</IceCandidates>
</WebRTC>
</Publishers>
</Bind>
<!--
Enable this configuration if you want to use API Server
<AccessToken> is a token for authentication, and when you invoke the API, you must put "Basic base64encode(<AccessToken>)" in the "Authorization" header of the HTTP request.
For example, if you set <AccessToken> to "ome-access-token", you must set "Basic b21lLWFjY2Vzcy10b2tlbg==" in the "Authorization" header.
-->
<!--
<Managers>
<Host>
<Names>
<Name>*</Name>
</Names>
<TLS>
<CertPath>path/to/file.crt</CertPath>
<KeyPath>path/to/file.key</KeyPath>
<ChainCertPath>path/to/file.crt</ChainCertPath>
</TLS>
</Host>
<API>
<AccessToken>ome-access-token</AccessToken>
<CrossDomains>
<Url>*.airensoft.com</Url>
<Url>http://*.sub-domain.airensoft.com</Url>
<Url>http?://airensoft.*</Url>
</CrossDomains>
</API>
</Managers>
-->
<VirtualHosts>
<!-- You can use a wildcard like this to include multiple XMLs -->
<VirtualHost include="VHost*.xml" />
<VirtualHost>
<Name>default</Name>
<!--Distribution is a value that can be used when grouping the same vhost distributed across multiple servers. This value is output to the events log, so you can use it to aggregate statistics. -->
<Distribution>ovenmediaengine.com</Distribution>
<!-- Settings for multi ip/domain and TLS -->
<Host>
<Names>
<!-- Host names
<Name>stream1.airensoft.com</Name>
<Name>stream2.airensoft.com</Name>
<Name>*.sub.airensoft.com</Name>
<Name>192.168.0.1</Name>
-->
<Name>*</Name>
</Names>
<!--
<TLS>
<CertPath>path/to/file.crt</CertPath>
<KeyPath>path/to/file.key</KeyPath>
<ChainCertPath>path/to/file.crt</ChainCertPath>
</TLS>
-->
</Host>
<!--
Refer https://airensoft.gitbook.io/ovenmediaengine/signedpolicy
<SignedPolicy>
<PolicyQueryKeyName>policy</PolicyQueryKeyName>
<SignatureQueryKeyName>signature</SignatureQueryKeyName>
<SecretKey>aKq#1kj</SecretKey>
<Enables>
<Providers>rtmp,webrtc,srt</Providers>
<Publishers>webrtc,hls,llhls,dash,lldash</Publishers>
</Enables>
</SignedPolicy>
-->
<!--
<AdmissionWebhooks>
<ControlServerUrl></ControlServerUrl>
<SecretKey></SecretKey>
<Timeout>3000</Timeout>
<Enables>
<Providers>rtmp,webrtc,srt</Providers>
<Publishers>webrtc,hls,llhls,dash,lldash</Publishers>
</Enables>
</AdmissionWebhooks>
-->
<!-- <Origins>
<Properties>
<NoInputFailoverTimeout>3000</NoInputFailoverTimeout>
<UnusedStreamDeletionTimeout>60000</UnusedStreamDeletionTimeout>
</Properties>
<Origin>
<Location>/app/stream</Location>
<Pass>
<Scheme>ovt</Scheme>
<Urls><Url>origin.com:9000/app/stream_720p</Url></Urls>
</Pass>
<ForwardQueryParams>false</ForwardQueryParams>
</Origin>
<Origin>
<Location>/app/</Location>
<Pass>
<Scheme>ovt</Scheme>
<Urls><Url>origin.com:9000/app/</Url></Urls>
</Pass>
</Origin>
<Origin>
<Location>/edge/</Location>
<Pass>
<Scheme>ovt</Scheme>
<Urls><Url>origin.com:9000/app/</Url></Urls>
</Pass>
</Origin>
</Origins> -->
<!-- Settings for applications -->
<Applications>
<Application>
<Name>app</Name>
<!-- Application type (live/vod) -->
<Type>live</Type>
<OutputProfiles>
<!-- Enable this configuration if you want to use GPU hardware acceleration -->
<HardwareAcceleration>false</HardwareAcceleration>
<OutputProfile>
<Name>bypass_stream</Name>
<OutputStreamName>${OriginStreamName}</OutputStreamName>
<Encodes>
<Audio>
<Bypass>true</Bypass>
</Audio>
<Video>
<Bypass>true</Bypass>
</Video>
<Audio>
<Codec>opus</Codec>
<Bitrate>128000</Bitrate>
<Samplerate>48000</Samplerate>
<Channel>2</Channel>
</Audio>
<!--
<Video>
<Codec>vp8</Codec>
<Bitrate>1024000</Bitrate>
<Framerate>30</Framerate>
<Width>1280</Width>
<Height>720</Height>
<Preset>faster</Preset>
</Video>
-->
</Encodes>
</OutputProfile>
</OutputProfiles>
<Providers>
<OVT />
<WebRTC />
<RTMP />
<SRT />
<MPEGTS>
<StreamMap>
<!--
Set the stream name of the client connected to the port to "stream_${Port}"
For example, if a client connects to port 4000, OME creates a "stream_4000" stream
<Stream>
<Name>stream_${Port}</Name>
<Port>4000,4001-4004</Port>
</Stream>
<Stream>
<Name>stream_4005</Name>
<Port>4005</Port>
</Stream>
-->
<Stream>
<Name>stream_${Port}</Name>
<Port>4000</Port>
</Stream>
</StreamMap>
</MPEGTS>
<RTSPPull />
<WebRTC>
<Timeout>30000</Timeout>
</WebRTC>
</Providers>
<Publishers>
<AppWorkerCount>1</AppWorkerCount>
<StreamWorkerCount>8</StreamWorkerCount>
<OVT />
<WebRTC>
<Timeout>30000</Timeout>
<Rtx>false</Rtx>
<Ulpfec>false</Ulpfec>
<JitterBuffer>false</JitterBuffer>
</WebRTC>
<LLHLS>
<ChunkDuration>0.2</ChunkDuration>
<SegmentDuration>6</SegmentDuration>
<SegmentCount>10</SegmentCount>
<CrossDomains>
<Url>*</Url>
</CrossDomains>
</LLHLS>
</Publishers>
</Application>
</Applications>
</VirtualHost>
</VirtualHosts>
</Server>
Here is my docker run command (shell script):
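In essence it is something like the following (a sketch, not the verbatim script: the image name and config mount path are the standard ones from the OME Docker setup, and the published ports mirror the <Bind> section of the Server.xml above):
# Assumed invocation: stock airensoft/ovenmediaengine image with the ports
# from <Bind> published (RTMP, SRT and MPEG-TS over UDP, OVT, LLHLS/WebRTC
# signalling, TURN relay, ICE candidate range) and the config mounted in.
docker run -d --name ome \
  -p 1935:1935 \
  -p 9999:9999/udp \
  -p 4000:4000/udp \
  -p 9000:9000 \
  -p 3333:3333 \
  -p 3334:3334 \
  -p 3478:3478 \
  -p 10000-10005:10000-10005/udp \
  -v /path/to/origin_conf:/opt/ovenmediaengine/bin/origin_conf \
  airensoft/ovenmediaengine:latest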
Here is my ffmpeg command (spawned from Node.js, with the TS data piped to its stdin):
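Roughly, it has the following shape (a sketch: {ip} stands for the server address, the stream name test is assumed from the playlist URLs I mentioned above, and the streamid value is the percent-encoded srt:// URL that OME's SRT provider expects; this requires an ffmpeg build with libsrt):
# The Node.js app writes the hand-built MPEG-TS packets to ffmpeg's stdin;
# ffmpeg remuxes them without re-encoding and pushes to OME's SRT port 9999.
ffmpeg -f mpegts -i pipe:0 \
  -c copy \
  -f mpegts "srt://{ip}:9999?streamid=srt%3A%2F%2F{ip}%3A9999%2Fapp%2Ftest"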
I would appreciate any help. I've been down in the depths of live streaming for six months now, and I think this might be the cleanest solution I could come up with to achieve LL-HLS from raw audio/video samples. (Every piece of documentation uses a file such as an .mp4, an RTSP server, etc. as input; I need a solution that works explicitly on raw AAC/H.264 audio/video samples alone.)
I also apologize in advance if this question should be posted elsewhere.