nebulabroadcast / nebula-tutorial

The best starting point for anyone looking to familiarize themselves with Nebula

Error 500 from /api/playout: Unable to cue Offline playout file. #5

Open LightRO opened 5 months ago

LightRO commented 5 months ago

Hello everyone, the tutorial works really well and Nebula was installed quite quickly. However, I am currently stuck on the CasparCG instructions. I have done everything as described in the tutorial, but it does not work: when I double-click a video in Firefly under Rundown, this error occurs:

Error 500 from http://192.168.178.56/api/playout Unable to cue Offline playout file.

Nebula Log:

09d8d1bd7066_nebula-tutorial_backend_1 | DEBUG    urllib3.connecti Starting new HTTP connection (1): worker:42101
3cd7b23594c2_nebula-tutorial_worker_1 | ERROR    play1
3cd7b23594c2_nebula-tutorial_worker_1 |
3cd7b23594c2_nebula-tutorial_worker_1 | Traceback (most recent call last):
3cd7b23594c2_nebula-tutorial_worker_1 |       File "/opt/nebula/services/play/request_handler.py", line 57, in do_POST
3cd7b23594c2_nebula-tutorial_worker_1 |         result = self.server.methods[method](**postvars)
3cd7b23594c2_nebula-tutorial_worker_1 |                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
3cd7b23594c2_nebula-tutorial_worker_1 |       File "/opt/nebula/services/play/play.py", line 168, in cue
3cd7b23594c2_nebula-tutorial_worker_1 |         raise AssertionError(f"Unable to cue {state} playout file")
3cd7b23594c2_nebula-tutorial_worker_1 |     AssertionError: Unable to cue OFFLINE playout file
3cd7b23594c2_nebula-tutorial_worker_1 |
09d8d1bd7066_nebula-tutorial_backend_1 | DEBUG    urllib3.connecti http://worker:42101 "POST /cue HTTP/1.1" 500 None
09d8d1bd7066_nebula-tutorial_backend_1 | ERROR    nebula           EXCEPTION: 500 Unable to cue OFFLINE playout file

services.py

from nebula.settings.models import ServiceSettings

watch = """
<service>
    <folder id_storage="1" id_folder="1" path="media.dir/movies"></folder>
    <folder id_storage="1" id_folder="3" path="media.dir/stories"></folder>
    <folder id_storage="1" id_folder="4" path="media.dir/songs"></folder>
    <folder id_storage="1" id_folder="5" path="media.dir/fill"></folder>
    <folder id_storage="1" id_folder="6" path="media.dir/trailers"></folder>
    <folder id_storage="1" id_folder="7" path="media.dir/jingles"></folder>
    <folder id_storage="1" id_folder="8" path="media.dir/graphics"></folder>
</service>
"""

PLAY1 = "<settings><id_channel>1</id_channel></settings>"
PLAY2 = "<settings><id_channel>2</id_channel></settings>"

SERVICES = [
    ServiceSettings(id=1, type="broker", name="broker", host="worker"),
    ServiceSettings(id=2, type="watch", name="watch", host="worker", settings=watch),
    ServiceSettings(id=3, type="meta", name="meta", host="worker", loop_delay=2),
    ServiceSettings(id=4, type="conv", name="conv", host="worker"),
    ServiceSettings(id=11, type="play", name="play1", host="worker", settings=PLAY1),
    ServiceSettings(id=12, type="play", name="play2", host="worker", settings=PLAY2),
    ServiceSettings(id=14, type="psm", name="psm", host="worker"),

]

channel.py:

from nebula.settings.models import PlayoutChannelSettings, AcceptModel

scheduler_accepts = AcceptModel(folders=[1, 2])
rundown_accepts = AcceptModel(folders=[1, 3, 4, 5, 6, 7, 8, 9, 10])

channel1 = PlayoutChannelSettings(
    id=1,
    name="Channel 1",
    fps=25.0,
    plugins=[],
    solvers=[],
    day_start=(7, 0),
    scheduler_accepts=scheduler_accepts,
    rundown_accepts=rundown_accepts,
    rundown_columns=[],
    send_action=2,
    engine="casparcg",
    allow_remote=False,
    controller_host="worker",
    controller_port=42101,
    playout_storage=3,
    playout_dir="media",
    playout_container="mxf",
    config={
        "caspar_host": "192.168.178.59",
        "caspar_port": 5250,
        "caspar_osc_port": 6251,
        "caspar_channel": 1,
        "caspar_feed_layer": 10,
     },
)

# Configure second channel similarly

channel2 = PlayoutChannelSettings(
    id=2,
    name="Channel 2",
    fps=25.0,
    plugins=[],
    solvers=[],
    day_start=(7, 0),
    scheduler_accepts=scheduler_accepts,
    rundown_accepts=rundown_accepts,
    rundown_columns=[],
    send_action=2,
    engine="casparcg",
    allow_remote=False,
    controller_host="worker",
    controller_port=42102,
    playout_storage=3,
    playout_dir="media",
    playout_container="mxf",
    config={
        "caspar_host": "192.168.178.59",
        "caspar_port": 5250,
        "caspar_osc_port": 6252,
        "caspar_channel": 2,
        "caspar_feed_layer": 10,
    },
)

CHANNELS = [channel1, channel2]

in the actions folder I have created a new xml called playout.xml with the following content

<?xml version="1.0" encoding="UTF-8"?>
<settings>
    <allow_if>True</allow_if>
    <task mode="ffmpeg">
        <param name="filter:a">"loudnorm=I=-23"</param>
        <param name="ar">48000</param>
        <param name="c:v">"copy"</param>
        <param name="c:a">"pcm_s16le"</param>
        <output storage="asset.get_playout_storage(1)" direct="1"><![CDATA[asset.get_playout_path(1)]]></output>
    </task>
</settings>
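For orientation, the task above maps roughly to the ffmpeg command line sketched below. The input and output paths are placeholders (Nebula resolves the real output path from the `<output>` element via `asset.get_playout_path(1)`); the arguments mirror the `<param>` entries one-to-one.

```python
# Rough sketch of the ffmpeg call described by the playout task above.
# "input_source.mov" and "output.mxf" are placeholders - Nebula substitutes
# the real source and playout paths when the job runs.
cmd = [
    "ffmpeg",
    "-i", "input_source.mov",
    "-filter:a", "loudnorm=I=-23",  # normalize audio loudness to -23 LUFS
    "-ar", "48000",                 # resample audio to 48 kHz
    "-c:v", "copy",                 # copy the video stream untouched
    "-c:a", "pcm_s16le",            # encode audio as 16-bit PCM
    "output.mxf",
]
print(" ".join(cmd))
```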

I have now installed casparcg on my Windows computer (same network) and the configuration looks like this:

<?xml version="1.0" encoding="utf-8"?>

<configuration>
    <paths>
        <media-path>media/</media-path>
        <log-path disable="false">log/</log-path>
        <data-path>data/</data-path>
        <template-path>template/</template-path>
    </paths>
    <lock-clear-phrase>secret</lock-clear-phrase>
    <channels>
        <channel>
            <video-mode>720p5000</video-mode>
            <consumers>
                <screen />
                <system-audio />
            </consumers>
        </channel>
    </channels>
    <controllers>
        <tcp>
            <port>5250</port>
            <protocol>AMCP</protocol>
        </tcp>
    </controllers>
    <amcp>
        <media-server>
            <host>localhost</host>
            <port>8000</port>
        </media-server>
    </amcp>
</configuration>
<osc>
  <predefined-clients>
    <predefined-client>
      <address>192.168.178.56</address>
      <port>6251</port>
    </predefined-client>
    <predefined-client>
      <address>192.168.178.56</address>
      <port>6252</port>
    </predefined-client>
  </predefined-clients>
</osc>

The output of Casparcg is as follows:

Type "q" to close application.
Logging [info] or higher severity to log/

[2024-05-09 20:35:17.394] [info]    ############################################################################
[2024-05-09 20:35:17.394] [info]    CasparCG Server is distributed by the Swedish Broadcasting Corporation (SVT)
[2024-05-09 20:35:17.394] [info]    under the GNU General Public License GPLv3 or higher.
[2024-05-09 20:35:17.394] [info]    Please see LICENSE.TXT for details.
[2024-05-09 20:35:17.394] [info]    http://www.casparcg.com/
[2024-05-09 20:35:17.394] [info]    ############################################################################
[2024-05-09 20:35:17.394] [info]    Starting CasparCG Video and Graphics Playout Server 2.4.0 1e25c7a Stable
[2024-05-09 20:35:17.396] [info]    "D:/casparcg-server-v2.4.0-stable-windows\casparcg.config":
[2024-05-09 20:35:17.396] [info]    -----------------------------------------
[2024-05-09 20:35:17.396] [info]    <?xml version="1.0" encoding="utf-8"?>
[2024-05-09 20:35:17.396] [info]    <configuration>
[2024-05-09 20:35:17.396] [info]       <paths>
[2024-05-09 20:35:17.396] [info]          <media-path>media/</media-path>
[2024-05-09 20:35:17.396] [info]          <log-path disable="false">log/</log-path>
[2024-05-09 20:35:17.396] [info]          <data-path>data/</data-path>
[2024-05-09 20:35:17.396] [info]          <template-path>template/</template-path>
[2024-05-09 20:35:17.396] [info]       </paths>
[2024-05-09 20:35:17.396] [info]       <lock-clear-phrase>secret</lock-clear-phrase>
[2024-05-09 20:35:17.396] [info]       <channels>
[2024-05-09 20:35:17.396] [info]          <channel>
[2024-05-09 20:35:17.396] [info]             <video-mode>720p5000</video-mode>
[2024-05-09 20:35:17.396] [info]             <consumers>
[2024-05-09 20:35:17.396] [info]                <screen/>
[2024-05-09 20:35:17.396] [info]                <system-audio/>
[2024-05-09 20:35:17.396] [info]             </consumers>
[2024-05-09 20:35:17.396] [info]          </channel>
[2024-05-09 20:35:17.396] [info]       </channels>
[2024-05-09 20:35:17.396] [info]       <controllers>
[2024-05-09 20:35:17.396] [info]          <tcp>
[2024-05-09 20:35:17.396] [info]             <port>5250</port>
[2024-05-09 20:35:17.396] [info]             <protocol>AMCP</protocol>
[2024-05-09 20:35:17.396] [info]          </tcp>
[2024-05-09 20:35:17.396] [info]       </controllers>
[2024-05-09 20:35:17.396] [info]       <amcp>
[2024-05-09 20:35:17.396] [info]          <media-server>
[2024-05-09 20:35:17.396] [info]             <host>localhost</host>
[2024-05-09 20:35:17.396] [info]             <port>8000</port>
[2024-05-09 20:35:17.396] [info]          </media-server>
[2024-05-09 20:35:17.396] [info]       </amcp>
[2024-05-09 20:35:17.396] [info]    </configuration>
[2024-05-09 20:35:17.396] [info]    <osc>
[2024-05-09 20:35:17.396] [info]       <predefined-clients>
[2024-05-09 20:35:17.396] [info]          <predefined-client>
[2024-05-09 20:35:17.396] [info]             <address>192.168.178.56</address>
[2024-05-09 20:35:17.396] [info]             <port>6251</port>
[2024-05-09 20:35:17.396] [info]          </predefined-client>
[2024-05-09 20:35:17.396] [info]          <predefined-client>
[2024-05-09 20:35:17.396] [info]             <address>192.168.178.56</address>
[2024-05-09 20:35:17.396] [info]             <port>6252</port>
[2024-05-09 20:35:17.396] [info]          </predefined-client>
[2024-05-09 20:35:17.396] [info]       </predefined-clients>
[2024-05-09 20:35:17.396] [info]    </osc>
[2024-05-09 20:35:17.396] [info]    -----------------------------------------
[2024-05-09 20:35:17.396] [info]    Initialized video modes.
[2024-05-09 20:35:17.938] [info]    Initializing OpenGL Device.
[2024-05-09 20:35:17.943] [info]    Initialized OpenGL 4.5.0 NVIDIA 552.22 NVIDIA Corporation
[2024-05-09 20:35:17.991] [info]    Initialized OpenGL Accelerated GPU Image Mixer for channel 1
[2024-05-09 20:35:17.991] [info]    video_channel[1|720p5000] Successfully Initialized.
[2024-05-09 20:35:17.991] [info]    Initialized channels.
[2024-05-09 20:35:17.991] [info]    Initialized command repository.
[2024-05-09 20:35:17.991] [info]    Initialized image module.
[2024-05-09 20:35:17.991] [info]    Initialized ffmpeg module.
[2024-05-09 20:35:17.991] [info]    Initialized oal module.
[2024-05-09 20:35:17.991] [info]    Initialized decklink module.
[2024-05-09 20:35:17.991] [info]    Initialized screen module.
[2024-05-09 20:35:17.991] [info]    Initialized newtek module.
[2024-05-09 20:35:17.991] [info]    Initialized artnet module.
[0509/203518.028:WARNING:policy_logger.cc(148)] :components\enterprise\browser\controller\chrome_browser_cloud_management_controller.cc(88) Could not create policy manager as CBCM is not enabled.
[0509/203518.071:INFO:component_installer.cc(297)] Failed to verify installation for Widevine Content Decryption Module (C:\Users\rosch\AppData\Local\CEF\User Data\WidevineCdm\4.10.2557.0).: Das System kann den angegebenen Pfad nicht finden. (0x3)
[2024-05-09 20:35:18.071] [info]    Initialized html module.
[2024-05-09 20:35:18.071] [info]    Flash support is disabled
[2024-05-09 20:35:18.071] [info]    Initialized flash module.
[2024-05-09 20:35:18.084] [info]    Initialized bluefish module.
[2024-05-09 20:35:18.084] [info]    Initialized modules.
[2024-05-09 20:35:18.084] [info]    Screen consumer [1|720p5000] Initialized.
[2024-05-09 20:35:18.107] [info]    oal[1|720p5000] Initialized.
[2024-05-09 20:35:18.107] [info]    Initialized startup producers.
[2024-05-09 20:35:18.107] [info]    Initialized controllers.
[2024-05-09 20:35:18.107] [info]    Initialized osc.
[2024-05-09 20:50:51.908] [info]    async_event_server[:5250] Accepted connection from 192.168.178.56 (1 connections).
[2024-05-09 20:50:52.205] [info]    async_event_server[:5250] Accepted connection from 192.168.178.56 (2 connections).
[2024-05-09 20:55:22.295] [info]    async_event_server[:5250] Client 192.168.178.56 disconnected (1 connections).
[2024-05-09 20:55:22.304] [info]    async_event_server[:5250] Client 192.168.178.56 disconnected (0 connections).
[2024-05-09 20:55:33.448] [info]    async_event_server[:5250] Accepted connection from 192.168.178.56 (1 connections).
[2024-05-09 20:55:33.513] [info]    async_event_server[:5250] Accepted connection from 192.168.178.56 (2 connections).
martastain commented 5 months ago

Hello. I assume in the rundown you see your clip marked as "REMOTE"?

(screenshot: rundown with the clip marked as REMOTE)

In that case, the problem is that the asset has not been properly copied to the playout storage. There may be several reasons for that:

1. Storage

Playout storage is set to id 3. Could you confirm that when you start the worker, there's a log message similar to this?

nebula-worker-1    | INFO     dispatch         storage 3 (playout) is not mounted. Mounting...
nebula-worker-1    | INFO     dispatch         storage 3 (playout) mounted successfully

Based on your configuration, the shared storage is your caspar directory. Inside it, you should see a file ".nebula_root" - that's a marker file Nebula uses to determine whether the storage is writable.
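You can mimic that check yourself with a small helper (a hypothetical sketch, not Nebula's actual code): look for the marker file and test write access on the mount point.

```python
import os

def storage_looks_ready(mount_point: str) -> bool:
    """Rough approximation of the storage check described above:
    the ".nebula_root" marker must exist and the mount point
    must be writable by the current user."""
    marker = os.path.join(mount_point, ".nebula_root")
    return os.path.isfile(marker) and os.access(mount_point, os.W_OK)
```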

2. Playout action

The action XML contains <allow_if>True</allow_if>, so the action can be started manually. You should see it when you right-click the asset and select "Send to..." from the context menu (this works in the web interface and in the Firefly browser as well as in the rundown).

If you don't see the "playout" action listed in the dialog, make sure the action is included in settings/actions.py. If you don't want to embed the XML directly in the Python file, you may use a simple wrapper:

from nebula.settings.models import ActionSettings

def load_cfg(filename: str) -> str:
    with open(f"/settings/actions/{filename}.xml") as f:
        return f.read()

ACTIONS = [
    ActionSettings(
        id=1,
        name="proxy",
        type="conv",
        settings=load_cfg("proxy"),
    ),
    ActionSettings(
        id=2,
        name="playout",
        type="conv",
        settings=load_cfg("playout"),
    ),
]

3. Target location

As soon as you are able to run the "send to playout" action manually, check the "Jobs" page or the worker logs for errors. If everything works, you should see a file named {your_site_name}-{asset_id}.mxf in the media directory on the playout server.
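If it helps to predict the exact name, the convention can be sketched as follows (a hypothetical helper; "mxf" matches the playout_container set in channel.py):

```python
def expected_playout_file(site_name: str, asset_id: int, container: str = "mxf") -> str:
    # Illustrates the {your_site_name}-{asset_id}.mxf naming convention
    # described above; not Nebula's actual path-building code.
    return f"{site_name}-{asset_id}.{container}"
```

For example, asset ID 1 on a site named "nebula" would land as "nebula-1.mxf".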

4. PSM

If the file is there but Nebula still marks it as offline, it is possible the PSM service didn't pick it up properly. Keep in mind that PSM only handles files scheduled between "now" and "now + 24 hours". If your event is outside that range, you may need to change its start time (that is something I should really improve). It may also be the reason why the conversion job didn't start automatically.
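The 24-hour window rule can be sketched like this (a hypothetical helper illustrating the behaviour described above, not PSM's actual code):

```python
from datetime import datetime, timedelta

def in_psm_window(event_start: datetime, now: datetime, window_hours: int = 24) -> bool:
    # PSM, as described above, only considers events scheduled between
    # "now" and "now + 24 hours"; anything outside that range is ignored.
    return now <= event_start <= now + timedelta(hours=window_hours)
```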

I hope that helps. Please let me know if any of the solutions worked for you; any PR improving the tutorials in this repo would be highly appreciated.

LightRO commented 5 months ago

It could be a storage problem. I have created a file share on the Windows computer, and on the Nebula server I created a folder /broadcast and linked it to the Windows share. That part works.

Unfortunately, I can't quite follow the storage point, but it's already late here :D. I have therefore attached the entire log from the console:

root@tawoca-media001:~/nebula-tutorial# docker-compose up
WARNING: Found orphan containers (08ac33c0c3d4_nebula-tutorial_play_1) for this project. If you removed or renamed this service in your compose file, you can run this command with the --remove-orphans flag to clean it up.
Starting nebula-tutorial_postgres_1              ... done
Starting nebula-tutorial_redis_1                 ... done
Starting 6782018034cb_nebula-tutorial_channels_1 ... done
Starting 09d8d1bd7066_nebula-tutorial_backend_1  ... done
Starting 3cd7b23594c2_nebula-tutorial_worker_1   ... done
Attaching to nebula-tutorial_postgres_1, nebula-tutorial_redis_1, 6782018034cb_nebula-tutorial_channels_1, 09d8d1bd7066_nebula-tutorial_backend_1, 3cd7b23594c2_nebula-tutorial_worker_1
09d8d1bd7066_nebula-tutorial_backend_1 |
09d8d1bd7066_nebula-tutorial_backend_1 | Starting the server...
09d8d1bd7066_nebula-tutorial_backend_1 |
09d8d1bd7066_nebula-tutorial_backend_1 | DEBUG    asyncio          Using selector: EpollSelector
6782018034cb_nebula-tutorial_channels_1 | INFO     dispatch         Starting StorageMonitor
6782018034cb_nebula-tutorial_channels_1 | DEBUG    dispatch         Connecting messaging to redis://redis
6782018034cb_nebula-tutorial_channels_1 | INFO     dispatch         Starting ServiceMonitor
postgres_1  |
postgres_1  | PostgreSQL Database directory appears to contain a database; Skipping initialization
postgres_1  |
postgres_1  | 2024-05-09 20:56:38.448 UTC [1] LOG:  starting PostgreSQL 16.2 (Debian 16.2-1.pgdg120+2) on x86_64-pc-linux-gnu, compiled by gcc (Debian 12.2.0-14) 12.2.0, 64-bit
postgres_1  | 2024-05-09 20:56:38.449 UTC [1] LOG:  listening on IPv4 address "0.0.0.0", port 5432
postgres_1  | 2024-05-09 20:56:38.449 UTC [1] LOG:  listening on IPv6 address "::", port 5432
postgres_1  | 2024-05-09 20:56:38.452 UTC [1] LOG:  listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
postgres_1  | 2024-05-09 20:56:38.458 UTC [29] LOG:  database system was shut down at 2024-05-09 20:56:11 UTC
postgres_1  | 2024-05-09 20:56:38.478 UTC [1] LOG:  database system is ready to accept connections
redis_1     | 1:C 09 May 2024 20:56:38.524 # WARNING Memory overcommit must be enabled! Without it, a background save or replication may fail under low memory condition. Being disabled, it can also cause failures without low memory condition, see https://github.com/jemalloc/jemalloc/issues/1328. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
redis_1     | 1:C 09 May 2024 20:56:38.528 * oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
redis_1     | 1:C 09 May 2024 20:56:38.528 * Redis version=7.2.4, bits=64, commit=00000000, modified=0, pid=1, just started
redis_1     | 1:C 09 May 2024 20:56:38.528 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
redis_1     | 1:M 09 May 2024 20:56:38.529 * monotonic clock: POSIX clock_gettime
redis_1     | 1:M 09 May 2024 20:56:38.531 * Running mode=standalone, port=6379.
redis_1     | 1:M 09 May 2024 20:56:38.532 * Server initialized
redis_1     | 1:M 09 May 2024 20:56:38.532 * Loading RDB produced by version 7.2.4
redis_1     | 1:M 09 May 2024 20:56:38.532 * RDB age 27 seconds
redis_1     | 1:M 09 May 2024 20:56:38.532 * RDB memory usage when created 0.95 Mb
redis_1     | 1:M 09 May 2024 20:56:38.532 * Done loading RDB, keys loaded: 5, keys expired: 0.
redis_1     | 1:M 09 May 2024 20:56:38.532 * DB loaded from disk: 0.000 seconds
redis_1     | 1:M 09 May 2024 20:56:38.532 * Ready to accept connections tcp
09d8d1bd7066_nebula-tutorial_backend_1 | INFO     setup            Connected to the database
09d8d1bd7066_nebula-tutorial_backend_1 | INFO     setup            Creating database schema
09d8d1bd7066_nebula-tutorial_backend_1 | INFO     setup            Found overrides for actions
09d8d1bd7066_nebula-tutorial_backend_1 | INFO     setup            Found overrides for channels
09d8d1bd7066_nebula-tutorial_backend_1 | INFO     setup            Found overrides for services
09d8d1bd7066_nebula-tutorial_backend_1 | INFO     setup            Applying system settings
09d8d1bd7066_nebula-tutorial_backend_1 | DEBUG    httpx            load_ssl_context verify=True cert=None trust_env=True http2=False
09d8d1bd7066_nebula-tutorial_backend_1 | DEBUG    httpx            load_verify_locations cafile='/usr/local/lib/python3.12/site-packages/certifi/cacert.pem'
09d8d1bd7066_nebula-tutorial_backend_1 | DEBUG    httpcore.connect connect_tcp.started host='cs.nbla.xyz' port=443 local_address=None timeout=5.0 socket_options=None
3cd7b23594c2_nebula-tutorial_worker_1 | INFO     dispatch         Starting StorageMonitor
3cd7b23594c2_nebula-tutorial_worker_1 | DEBUG    dispatch         Connecting messaging to redis://redis
09d8d1bd7066_nebula-tutorial_backend_1 | DEBUG    httpcore.connect connect_tcp.complete return_value=<httpcore._backends.anyio.AnyIOStream object at 0x7f5893f763c0>
09d8d1bd7066_nebula-tutorial_backend_1 | DEBUG    httpcore.connect start_tls.started ssl_context=<ssl.SSLContext object at 0x7f58934581d0> server_hostname='cs.nbla.xyz' timeout=5.0
09d8d1bd7066_nebula-tutorial_backend_1 | DEBUG    httpcore.connect start_tls.complete return_value=<httpcore._backends.anyio.AnyIOStream object at 0x7f5893defec0>
09d8d1bd7066_nebula-tutorial_backend_1 | DEBUG    httpcore.http11  send_request_headers.started request=<Request [b'GET']>
09d8d1bd7066_nebula-tutorial_backend_1 | DEBUG    httpcore.http11  send_request_headers.complete
09d8d1bd7066_nebula-tutorial_backend_1 | DEBUG    httpcore.http11  send_request_body.started request=<Request [b'GET']>
09d8d1bd7066_nebula-tutorial_backend_1 | DEBUG    httpcore.http11  send_request_body.complete
09d8d1bd7066_nebula-tutorial_backend_1 | DEBUG    httpcore.http11  receive_response_headers.started request=<Request [b'GET']>
09d8d1bd7066_nebula-tutorial_backend_1 | DEBUG    httpcore.http11  receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'Accept-Ranges', b'bytes'), (b'Alt-Svc', b'h3=":443"; ma=2592000'), (b'Content-Length', b'440440'), (b'Content-Type', b'application/json'), (b'Etag', b'"s6zhnr9fug"'), (b'Last-Modified', b'Tue, 09 Jan 2024 07:47:51 GMT'), (b'Date', b'Thu, 09 May 2024 20:56:41 GMT')])
09d8d1bd7066_nebula-tutorial_backend_1 | INFO     httpx            HTTP Request: GET https://cs.nbla.xyz/dump "HTTP/1.1 200 OK"
09d8d1bd7066_nebula-tutorial_backend_1 | DEBUG    httpcore.http11  receive_response_body.started request=<Request [b'GET']>
09d8d1bd7066_nebula-tutorial_backend_1 | DEBUG    httpcore.http11  receive_response_body.complete
09d8d1bd7066_nebula-tutorial_backend_1 | DEBUG    httpcore.http11  response_closed.started
09d8d1bd7066_nebula-tutorial_backend_1 | DEBUG    httpcore.http11  response_closed.complete
09d8d1bd7066_nebula-tutorial_backend_1 | DEBUG    httpcore.connect close.started
09d8d1bd7066_nebula-tutorial_backend_1 | DEBUG    httpcore.connect close.complete
3cd7b23594c2_nebula-tutorial_worker_1 | INFO     dispatch         Starting ServiceMonitor
3cd7b23594c2_nebula-tutorial_worker_1 | DEBUG    dispatch         AutoStarting service ID 1 (broker)
3cd7b23594c2_nebula-tutorial_worker_1 | INFO     dispatch         Starting service ID 1 (broker)
3cd7b23594c2_nebula-tutorial_worker_1 | DEBUG    dispatch         AutoStarting service ID 2 (watch)
3cd7b23594c2_nebula-tutorial_worker_1 | INFO     dispatch         Starting service ID 2 (watch)
3cd7b23594c2_nebula-tutorial_worker_1 | DEBUG    dispatch         AutoStarting service ID 3 (meta)
3cd7b23594c2_nebula-tutorial_worker_1 | INFO     dispatch         Starting service ID 3 (meta)
3cd7b23594c2_nebula-tutorial_worker_1 | DEBUG    dispatch         AutoStarting service ID 4 (conv)
3cd7b23594c2_nebula-tutorial_worker_1 | INFO     dispatch         Starting service ID 4 (conv)
3cd7b23594c2_nebula-tutorial_worker_1 | DEBUG    dispatch         AutoStarting service ID 11 (play1)
3cd7b23594c2_nebula-tutorial_worker_1 | INFO     dispatch         Starting service ID 11 (play1)
3cd7b23594c2_nebula-tutorial_worker_1 | DEBUG    dispatch         AutoStarting service ID 12 (play2)
3cd7b23594c2_nebula-tutorial_worker_1 | INFO     dispatch         Starting service ID 12 (play2)
3cd7b23594c2_nebula-tutorial_worker_1 | DEBUG    dispatch         AutoStarting service ID 14 (psm)
3cd7b23594c2_nebula-tutorial_worker_1 | INFO     dispatch         Starting service ID 14 (psm)
09d8d1bd7066_nebula-tutorial_backend_1 |
09d8d1bd7066_nebula-tutorial_backend_1 | Create a new administrator
09d8d1bd7066_nebula-tutorial_backend_1 |
09d8d1bd7066_nebula-tutorial_backend_1 | This will create a new user with admin rights in the database
09d8d1bd7066_nebula-tutorial_backend_1 | Hit Ctrl-C to cancel
09d8d1bd7066_nebula-tutorial_backend_1 |
09d8d1bd7066_nebula-tutorial_backend_1 |
09d8d1bd7066_nebula-tutorial_backend_1 | Change user password
09d8d1bd7066_nebula-tutorial_backend_1 |
09d8d1bd7066_nebula-tutorial_backend_1 | This will change the password of an existing user
09d8d1bd7066_nebula-tutorial_backend_1 | Hit Ctrl-C to cancel
09d8d1bd7066_nebula-tutorial_backend_1 |
09d8d1bd7066_nebula-tutorial_backend_1 |
09d8d1bd7066_nebula-tutorial_backend_1 | Create a new administrator
09d8d1bd7066_nebula-tutorial_backend_1 |
09d8d1bd7066_nebula-tutorial_backend_1 | This will create a new user with admin rights in the database
09d8d1bd7066_nebula-tutorial_backend_1 | Hit Ctrl-C to cancel
09d8d1bd7066_nebula-tutorial_backend_1 |
09d8d1bd7066_nebula-tutorial_backend_1 | INFO     nebula           Server started
3cd7b23594c2_nebula-tutorial_worker_1 | DEBUG    watch            Initializing service ID 2
3cd7b23594c2_nebula-tutorial_worker_1 | INFO     watch            Service started
3cd7b23594c2_nebula-tutorial_worker_1 | DEBUG    watch            Connecting messaging to redis://redis
3cd7b23594c2_nebula-tutorial_worker_1 | DEBUG    meta             Initializing service ID 3
3cd7b23594c2_nebula-tutorial_worker_1 | DEBUG    meta             Following actions will be restarted on source update: all
3cd7b23594c2_nebula-tutorial_worker_1 | DEBUG    meta             Connecting messaging to redis://redis
3cd7b23594c2_nebula-tutorial_worker_1 | INFO     meta             Service started
3cd7b23594c2_nebula-tutorial_worker_1 | DEBUG    play1            Initializing service ID 11
3cd7b23594c2_nebula-tutorial_worker_1 | DEBUG    play1            Connecting messaging to redis://redis
3cd7b23594c2_nebula-tutorial_worker_1 | INFO     play1            Starting OSC listener on port 6251
3cd7b23594c2_nebula-tutorial_worker_1 | INFO     play1            Using port 42101 for the HTTP interface.
3cd7b23594c2_nebula-tutorial_worker_1 | DEBUG    psm              Initializing service ID 14
3cd7b23594c2_nebula-tutorial_worker_1 | WARNING  play1            Playout plugins directory  does not exist
3cd7b23594c2_nebula-tutorial_worker_1 | DEBUG    psm              Connecting messaging to redis://redis
3cd7b23594c2_nebula-tutorial_worker_1 | DEBUG    broker           Initializing service ID 1
3cd7b23594c2_nebula-tutorial_worker_1 | INFO     psm              Service started
3cd7b23594c2_nebula-tutorial_worker_1 | DEBUG    broker           Connecting messaging to redis://redis
3cd7b23594c2_nebula-tutorial_worker_1 | INFO     play1            Service started
3cd7b23594c2_nebula-tutorial_worker_1 | ERROR    psm              storage 3 (Unknown) is not available
3cd7b23594c2_nebula-tutorial_worker_1 | INFO     broker           Service started
3cd7b23594c2_nebula-tutorial_worker_1 | DEBUG    play2            Initializing service ID 12
3cd7b23594c2_nebula-tutorial_worker_1 | DEBUG    play2            Connecting messaging to redis://redis
3cd7b23594c2_nebula-tutorial_worker_1 | INFO     play2            Starting OSC listener on port 6252
3cd7b23594c2_nebula-tutorial_worker_1 | INFO     play2            Using port 42102 for the HTTP interface.
3cd7b23594c2_nebula-tutorial_worker_1 | WARNING  play2            Playout plugins directory  does not exist
3cd7b23594c2_nebula-tutorial_worker_1 | DEBUG    conv             Initializing service ID 4
3cd7b23594c2_nebula-tutorial_worker_1 | DEBUG    conv             Registering action proxy
3cd7b23594c2_nebula-tutorial_worker_1 | DEBUG    conv             Connecting messaging to redis://redis
3cd7b23594c2_nebula-tutorial_worker_1 | DEBUG    conv             Registering action playout
3cd7b23594c2_nebula-tutorial_worker_1 | INFO     play2            Service started
3cd7b23594c2_nebula-tutorial_worker_1 | INFO     conv             Service started
3cd7b23594c2_nebula-tutorial_worker_1 | WARNING  play1            Waiting for OSC
3cd7b23594c2_nebula-tutorial_worker_1 | WARNING  play2            Waiting for OSC
3cd7b23594c2_nebula-tutorial_worker_1 | WARNING  play1            Waiting for OSC
3cd7b23594c2_nebula-tutorial_worker_1 | WARNING  play2            Waiting for OSC

When I try to send the video to playout via Send to I get the following error in the console:

3cd7b23594c2_nebula-tutorial_worker_1 | INFO     conv             Got job ID:2 [playout@asset ID:1 (OpenerFull)]
3cd7b23594c2_nebula-tutorial_worker_1 | DEBUG    conv             Configuring task 1 of 1
3cd7b23594c2_nebula-tutorial_worker_1 | ERROR    conv             job ID:2 [playout@asset ID:1 (OpenerFull)]: Failed to configure task 1: Target storage is not writable
3cd7b23594c2_nebula-tutorial_worker_1 | ERROR    conv
3cd7b23594c2_nebula-tutorial_worker_1 |
3cd7b23594c2_nebula-tutorial_worker_1 | Traceback (most recent call last):
3cd7b23594c2_nebula-tutorial_worker_1 |       File "/opt/nebula/services/conv/conv.py", line 114, in on_main
3cd7b23594c2_nebula-tutorial_worker_1 |         self.encoder.configure()
3cd7b23594c2_nebula-tutorial_worker_1 |       File "/opt/nebula/services/conv/ffmpeg.py", line 61, in configure
3cd7b23594c2_nebula-tutorial_worker_1 |         raise ConversionError("Target storage is not writable")
3cd7b23594c2_nebula-tutorial_worker_1 |     services.conv.common.ConversionError: Target storage is not writable
3cd7b23594c2_nebula-tutorial_worker_1 |
3cd7b23594c2_nebula-tutorial_worker_1 | WARNING  play1            Waiting for OSC
3cd7b23594c2_nebula-tutorial_worker_1 | WARNING  play2            Waiting for OSC
martastain commented 5 months ago

This is what I usually do: assuming the playout drive is d:\, share the entire drive as "playout", so it is accessible via the samba path \\playoutserver\playout.

Configure the storage in Nebula

In settings/storages.py:

from nebula.settings.models import StorageSettings

STORAGES = [

    # Additional storages may go here. Implicit "local" storages
    # mounted via the "volumes" section of docker-compose.yml
    # don't need to be defined - only samba shares do.

    StorageSettings(
        id=3,
        name="playout",
        protocol="samba",
        path="//playoutserver/playout",
        options={
            "login": "nebula",
            "password": "nebula",
            "samba_version": "3.0",
        },
    ),
]

Then create the d:\media directory and ensure both the share root and that directory are writable over samba.

Set <media-path>d:/media</media-path> in casparcg.config, so that directory is used instead of "media" inside the application directory.

Apply the Nebula settings (make setup), restart the worker, and check whether the storage mounts correctly.

You may also try accessing the worker shell with docker compose exec worker /bin/bash, then run ls /mnt/nebula_03 to ensure the container sees the share, and touch /mnt/nebula_03/media/test.txt to verify you can write there.

LightRO commented 5 months ago

The command "make setup" gives me the following output:

root@tawoca-media001:~/nebula-tutorial# make setup
make: *** [Makefile:17: setup] Error 1

storages.py did not exist for me yet; I created it with the content from your last comment, but entered my own login data and the path to my share folder.

09d8d1bd7066_nebula-tutorial_backend_1 | Unable to apply new capability set.
09d8d1bd7066_nebula-tutorial_backend_1 | ERROR    nebula           Unable to mount storage 3 (playout)
3cd7b23594c2_nebula-tutorial_worker_1 | WARNING  play1            Waiting for OSC
3cd7b23594c2_nebula-tutorial_worker_1 | WARNING  play2            Waiting for OSC
6782018034cb_nebula-tutorial_channels_1 | INFO     dispatch         storage 3 (playout) is not mounted. Mounting...
6782018034cb_nebula-tutorial_channels_1 | Unable to apply new capability set.
6782018034cb_nebula-tutorial_channels_1 | ERROR    dispatch         Unable to mount storage 3 (playout)
3cd7b23594c2_nebula-tutorial_worker_1 | WARNING  play1            Waiting for OSC
3cd7b23594c2_nebula-tutorial_worker_1 | WARNING  play2            Waiting for OSC
09d8d1bd7066_nebula-tutorial_backend_1 | INFO     nebula           storage 3 (playout) is not mounted. Mounting...
09d8d1bd7066_nebula-tutorial_backend_1 | Unable to apply new capability set.
09d8d1bd7066_nebula-tutorial_backend_1 | ERROR    nebula           Unable to mount storage 3 (playout)
3cd7b23594c2_nebula-tutorial_worker_1 | WARNING  play1            Waiting for OSC
3cd7b23594c2_nebula-tutorial_worker_1 | WARNING  play2            Waiting for OSC
3cd7b23594c2_nebula-tutorial_worker_1 | WARNING  play1            Waiting for OSC
3cd7b23594c2_nebula-tutorial_worker_1 | WARNI

Maybe the following will help, because I forgot to include this file:

docker-compose.yml:

version: "3.3"

volumes:
  db: {}

services:
  postgres:
    image: postgres
    environment:
      - "POSTGRES_USER=nebula"
      - "POSTGRES_PASSWORD=nebula"
      - "POSTGRES_DB=nebula"
    volumes:
      - "/etc/localtime:/etc/localtime:ro"
      - "db:/var/lib/postgresql/data"
    restart: unless-stopped

  redis:
    image: redis:alpine
    restart: unless-stopped

  backend:
    image: nebulabroadcast/nebula-server:latest
    ports:
      - "4455:80"
    volumes:
      - "/etc/localtime:/etc/localtime:ro"
      - "./plugins:/plugins"
      - "./settings:/settings"
      - "./storage:/mnt/nebula_01"
    depends_on:
      - redis
      - postgres

  worker:
    image: nebulabroadcast/nebula-worker:latest
    hostname: worker
    volumes:
      - "/etc/localtime:/etc/localtime:ro"
      - "./storage:/mnt/nebula_01"
    privileged: true

    depends_on:
      - backend

  channels:
    image: nebulabroadcast/nebula-worker:latest
    ports:
      - "6251:6251/udp"
      - "6252:6252/udp"
martastain commented 5 months ago

You need to update docker-compose.yml to add higher privileges to the containers to mount remote storages:

https://github.com/nebulabroadcast/nebula-tutorial/blob/main/doc/remote-storages.md

LightRO commented 5 months ago

I actually did that yesterday and then ran into two problems:

If I follow the first point:

services:
  backend:
    volumes:
      # Remove this line
      - ./storages:/mnt/nebula_01

  worker:
    volumes:
      # Remove this line
      - ./storages:/mnt/nebula_01

Then my config would look like this:

version: "3.3"

volumes:
  db: {}

services:
  postgres:
    image: postgres
    environment:
      - "POSTGRES_USER=nebula"
      - "POSTGRES_PASSWORD=nebula"
      - "POSTGRES_DB=nebula"
    volumes:
      - "/etc/localtime:/etc/localtime:ro"
      - "db:/var/lib/postgresql/data"
    restart: unless-stopped

  redis:
    image: redis:alpine
    restart: unless-stopped

  backend:
    image: nebulabroadcast/nebula-server:latest
    ports:
      - "4455:80"
    volumes:
      - "/etc/localtime:/etc/localtime:ro"
      - "./plugins:/plugins"
      - "./settings:/settings"
    depends_on:
      - redis
      - postgres

  worker:
    image: nebulabroadcast/nebula-worker:latest
    hostname: worker
    volumes:
      - "/etc/localtime:/etc/localtime:ro"
    privileged: true

    depends_on:
      - backend

  channels:
    image: nebulabroadcast/nebula-worker:latest
    ports:
      - "6251:6251/udp"
      - "6252:6252/udp"

As for point 2:

    cap_add:
        - SYS_ADMIN
        - DAC_READ_SEARCH
    privileged: true

unfortunately, I don't know where to add it. No matter where I put it, I get an error, and if I leave it out, I also get an error because the storage paths are no longer available.

martastain commented 5 months ago

This is the exact docker-compose file I use on my dev machine; I hope that helps. Keep in mind I have both the production storage and the playout storage mapped via samba, so Nebula handles them. If you're expanding from the original docker-compose, you may want to keep the first storage bind-mounted and use Nebula-managed samba only for the playout server.

volumes:
  db: {}

services:
  postgres:
    image: postgres
    environment:
      - "POSTGRES_USER=nebula"
      - "POSTGRES_PASSWORD=nebula"
      - "POSTGRES_DB=nebula"
    volumes:
      - "/etc/localtime:/etc/localtime:ro"
      - "db:/var/lib/postgresql/data"
    restart: unless-stopped

  redis:
    image: redis:alpine
    restart: unless-stopped

  backend:
    image: nebulabroadcast/nebula-server:dev
    privileged: true
    ports:
      - "4455:80"
    volumes:
      - "/etc/localtime:/etc/localtime:ro"
      - "./plugins:/plugins"
      - "./settings:/settings"
    environment:
      - "NEBULA_LOG_LEVEL=trace"
    depends_on:
      - redis
      - postgres

  worker:
    image: nebulabroadcast/nebula-worker:dev
    hostname: worker
    privileged: true
    ports:
      - "6251:6251/udp"
      - "6252:6252/udp"

    depends_on:
      - backend
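A lighter-weight alternative to privileged: true, per the remote-storages doc linked earlier, is granting only the mount-related capabilities under the same services (a sketch; I have not tested this variant here):

```yaml
services:
  backend:
    image: nebulabroadcast/nebula-server:dev
    cap_add:
      - SYS_ADMIN
      - DAC_READ_SEARCH
  worker:
    image: nebulabroadcast/nebula-worker:dev
    cap_add:
      - SYS_ADMIN
      - DAC_READ_SEARCH
```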
LightRO commented 5 months ago

I've adopted those parts; this is what my config looks like now:

version: "3.3"

volumes:
  db: {}

services:
  postgres:
    image: postgres
    environment:
      - "POSTGRES_USER=nebula"
      - "POSTGRES_PASSWORD=nebula"
      - "POSTGRES_DB=nebula"
    volumes:
      - "/etc/localtime:/etc/localtime:ro"
      - "db:/var/lib/postgresql/data"
    restart: unless-stopped

  redis:
    image: redis:alpine
    restart: unless-stopped

  backend:
    image: nebulabroadcast/nebula-server:latest
    privileged: true
    ports:
      - "4455:80"
    volumes:
      - "/etc/localtime:/etc/localtime:ro"
      - "./plugins:/plugins"
      - "./settings:/settings"
    environment:
      - "NEBULA_LOG_LEVEL=trace"
    depends_on:
      - redis
      - postgres

  worker:
    image: nebulabroadcast/nebula-worker:latest
    hostname: worker
    privileged: true
    ports:
      - "6251:6251/udp"
      - "6252:6252/udp"

    depends_on:
      - backend

When I start Docker Compose now, I get the following error:

root@tawoca-media001:~/nebula-tutorial# docker-compose up
WARNING: Found orphan containers (08ac33c0c3d4_nebula-tutorial_play_1, nebula-tutorial_samba_1, 6782018034cb_nebula-tutorial_channels_1) for this project. If you removed or renamed this service in your compose file, you can run this command with the --remove-orphans flag to clean it up.
nebula-tutorial_postgres_1 is up-to-date
nebula-tutorial_redis_1 is up-to-date
Recreating 09d8d1bd7066_nebula-tutorial_backend_1 ...

ERROR: for 09d8d1bd7066_nebula-tutorial_backend_1  'ContainerConfig'

ERROR: for backend  'ContainerConfig'
Traceback (most recent call last):
  File "/usr/bin/docker-compose", line 33, in <module>
    sys.exit(load_entry_point('docker-compose==1.29.2', 'console_scripts', 'docker-compose')())
  File "/usr/lib/python3/dist-packages/compose/cli/main.py", line 81, in main
    command_func()
  File "/usr/lib/python3/dist-packages/compose/cli/main.py", line 203, in perform_command
    handler(command, command_options)
  File "/usr/lib/python3/dist-packages/compose/metrics/decorator.py", line 18, in wrapper
    result = fn(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/compose/cli/main.py", line 1186, in up
    to_attach = up(False)
  File "/usr/lib/python3/dist-packages/compose/cli/main.py", line 1166, in up
    return self.project.up(
  File "/usr/lib/python3/dist-packages/compose/project.py", line 697, in up
    results, errors = parallel.parallel_execute(
  File "/usr/lib/python3/dist-packages/compose/parallel.py", line 108, in parallel_execute
    raise error_to_reraise
  File "/usr/lib/python3/dist-packages/compose/parallel.py", line 206, in producer
    result = func(obj)
  File "/usr/lib/python3/dist-packages/compose/project.py", line 679, in do
    return service.execute_convergence_plan(
  File "/usr/lib/python3/dist-packages/compose/service.py", line 579, in execute_convergence_plan
    return self._execute_convergence_recreate(
  File "/usr/lib/python3/dist-packages/compose/service.py", line 499, in _execute_convergence_recreate
    containers, errors = parallel_execute(
  File "/usr/lib/python3/dist-packages/compose/parallel.py", line 108, in parallel_execute
    raise error_to_reraise
  File "/usr/lib/python3/dist-packages/compose/parallel.py", line 206, in producer
    result = func(obj)
  File "/usr/lib/python3/dist-packages/compose/service.py", line 494, in recreate
    return self.recreate_container(
  File "/usr/lib/python3/dist-packages/compose/service.py", line 612, in recreate_container
    new_container = self.create_container(
  File "/usr/lib/python3/dist-packages/compose/service.py", line 330, in create_container
    container_options = self._get_container_create_options(
  File "/usr/lib/python3/dist-packages/compose/service.py", line 921, in _get_container_create_options
    container_options, override_options = self._build_container_volume_options(
  File "/usr/lib/python3/dist-packages/compose/service.py", line 960, in _build_container_volume_options
    binds, affinity = merge_volume_bindings(
  File "/usr/lib/python3/dist-packages/compose/service.py", line 1548, in merge_volume_bindings
    old_volumes, old_mounts = get_container_data_volumes(
  File "/usr/lib/python3/dist-packages/compose/service.py", line 1579, in get_container_data_volumes
    container.image_config['ContainerConfig'].get('Volumes') or {}
KeyError: 'ContainerConfig'
root@tawoca-media001:~/nebula-tutorial# 
martastain commented 5 months ago

IDK, this comes from Docker, not Nebula... maybe this? https://askubuntu.com/questions/1508129/docker-compose-giving-containerconfig-errors-after-update-today

LightRO commented 5 months ago

The server starts again with the Docker Compose v2 syntax. As shown below, I can also access the directory where the media files should be located (the Windows server running CasparCG):

root@tawoca-media001:~/nebula-tutorial# docker compose exec worker /bin/bash
root@worker:/opt/nebula# ls /mnt/nebula_03
media  media.dir
root@worker:/opt/nebula# ls /mnt/nebula_03/media
fill  graphics  jingles  movies  nebula-1.mxf  songs  stories  test.txt  trailers
root@worker:/opt/nebula# ls /mnt/nebula_03/media/movies/
NowBFNextTMOS.mp4  OpenerFull.mp4
root@worker:/opt/nebula#

However, it does not load the second MP4 under Incoming.

(screenshot)

Since the file currently displayed is corrupt, I tried to add the second file to test whether it works now, but unfortunately it is not displayed.

The server output now looks as follows:

root@tawoca-media001:~/nebula-tutorial# docker compose up
WARN[0000] Found orphan containers ([nebula-tutorial_samba_1 6782018034cb_nebula-tutorial_channels_1 08ac33c0c3d4_nebula-tutorial_play_1]) for this project. If you removed or renamed this service in your compose file, you can run this command with the --remove-orphans flag to clean it up.
[+] Running 4/0
 ✔ Container nebula-tutorial-redis-1     Created                                                                                                                                                           0.0s
 ✔ Container nebula-tutorial-postgres-1  Created                                                                                                                                                           0.0s
 ✔ Container nebula-tutorial-backend-1   Created                                                                                                                                                           0.0s
 ✔ Container nebula-tutorial-worker-1    Created                                                                                                                                                           0.0s
Attaching to backend-1, postgres-1, redis-1, worker-1
redis-1     | 1:C 10 May 2024 12:39:57.092 # WARNING Memory overcommit must be enabled! Without it, a background save or replication may fail under low memory condition. Being disabled, it can also cause failures without low memory condition, see https://github.com/jemalloc/jemalloc/issues/1328. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
redis-1     | 1:C 10 May 2024 12:39:57.092 * oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
redis-1     | 1:C 10 May 2024 12:39:57.092 * Redis version=7.2.4, bits=64, commit=00000000, modified=0, pid=1, just started
redis-1     | 1:C 10 May 2024 12:39:57.092 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
redis-1     | 1:M 10 May 2024 12:39:57.093 * monotonic clock: POSIX clock_gettime
redis-1     | 1:M 10 May 2024 12:39:57.094 * Running mode=standalone, port=6379.
redis-1     | 1:M 10 May 2024 12:39:57.099 * Server initialized
redis-1     | 1:M 10 May 2024 12:39:57.099 * Loading RDB produced by version 7.2.4
redis-1     | 1:M 10 May 2024 12:39:57.099 * RDB age 13 seconds
redis-1     | 1:M 10 May 2024 12:39:57.099 * RDB memory usage when created 0.94 Mb
redis-1     | 1:M 10 May 2024 12:39:57.100 * Done loading RDB, keys loaded: 3, keys expired: 0.
redis-1     | 1:M 10 May 2024 12:39:57.100 * DB loaded from disk: 0.000 seconds
redis-1     | 1:M 10 May 2024 12:39:57.100 * Ready to accept connections tcp
postgres-1  |
postgres-1  | PostgreSQL Database directory appears to contain a database; Skipping initialization
postgres-1  |
postgres-1  | 2024-05-10 12:39:57.344 UTC [1] LOG:  starting PostgreSQL 16.2 (Debian 16.2-1.pgdg120+2) on x86_64-pc-linux-gnu, compiled by gcc (Debian 12.2.0-14) 12.2.0, 64-bit
postgres-1  | 2024-05-10 12:39:57.344 UTC [1] LOG:  listening on IPv4 address "0.0.0.0", port 5432
postgres-1  | 2024-05-10 12:39:57.344 UTC [1] LOG:  listening on IPv6 address "::", port 5432
postgres-1  | 2024-05-10 12:39:57.347 UTC [1] LOG:  listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
postgres-1  | 2024-05-10 12:39:57.353 UTC [29] LOG:  database system was shut down at 2024-05-10 12:39:44 UTC
postgres-1  | 2024-05-10 12:39:57.361 UTC [1] LOG:  database system is ready to accept connections
backend-1   |
backend-1   | Starting the server...
backend-1   |
backend-1   | DEBUG    asyncio          Using selector: EpollSelector
worker-1    | INFO     dispatch         Starting StorageMonitor
worker-1    | DEBUG    dispatch         Connecting messaging to redis://redis
worker-1    | INFO     dispatch         Starting ServiceMonitor
worker-1    | INFO     dispatch         storage 3 (playout) is not mounted. Mounting...
worker-1    | DEBUG    dispatch         AutoStarting service ID 3 (meta)
worker-1    | INFO     dispatch         Starting service ID 3 (meta)
worker-1    | DEBUG    dispatch         AutoStarting service ID 2 (watch)
worker-1    | INFO     dispatch         Starting service ID 2 (watch)
worker-1    | DEBUG    dispatch         AutoStarting service ID 12 (play2)
worker-1    | INFO     dispatch         Starting service ID 12 (play2)
worker-1    | DEBUG    dispatch         AutoStarting service ID 11 (play1)
worker-1    | INFO     dispatch         Starting service ID 11 (play1)
worker-1    | DEBUG    dispatch         AutoStarting service ID 1 (broker)
worker-1    | INFO     dispatch         Starting service ID 1 (broker)
worker-1    | DEBUG    dispatch         AutoStarting service ID 4 (conv)
worker-1    | INFO     dispatch         Starting service ID 4 (conv)
worker-1    | DEBUG    dispatch         AutoStarting service ID 14 (psm)
worker-1    | INFO     dispatch         Starting service ID 14 (psm)
backend-1   | INFO     setup            Connected to the database
backend-1   | INFO     setup            Creating database schema
backend-1   | TRACE    setup            Loading settings overrides
backend-1   | INFO     setup            Found overrides for actions
backend-1   | INFO     setup            Found overrides for storages
backend-1   | INFO     setup            Found overrides for channels
backend-1   | INFO     setup            Found overrides for services
backend-1   | TRACE    setup            Validating settings template
backend-1   | INFO     setup            Applying system settings
backend-1   | TRACE    setup            Saved system settings
backend-1   | TRACE    setup            Saved 9 views
backend-1   | TRACE    setup            Saved 13 folders
backend-1   | TRACE    setup            Saved 2 playout channels
backend-1   | TRACE    setup            Saved 1 storages
worker-1    | INFO     dispatch         storage 3 (playout) mounted successfully
backend-1   | TRACE    setup            Saved 7 services
backend-1   | TRACE    setup            Saved 2 actions
backend-1   | TRACE    setup            Saved 137 meta types
backend-1   | DEBUG    httpx            load_ssl_context verify=True cert=None trust_env=True http2=False
backend-1   | DEBUG    httpx            load_verify_locations cafile='/usr/local/lib/python3.12/site-packages/certifi/cacert.pem'
backend-1   | DEBUG    httpcore.connect connect_tcp.started host='cs.nbla.xyz' port=443 local_address=None timeout=5.0 socket_options=None
backend-1   | DEBUG    httpcore.connect connect_tcp.complete return_value=<httpcore._backends.anyio.AnyIOStream object at 0x7f0ebab05d00>
backend-1   | DEBUG    httpcore.connect start_tls.started ssl_context=<ssl.SSLContext object at 0x7f0eba9fbc50> server_hostname='cs.nbla.xyz' timeout=5.0
backend-1   | DEBUG    httpcore.connect start_tls.complete return_value=<httpcore._backends.anyio.AnyIOStream object at 0x7f0eba9e1220>
backend-1   | DEBUG    httpcore.http11  send_request_headers.started request=<Request [b'GET']>
backend-1   | DEBUG    httpcore.http11  send_request_headers.complete
backend-1   | DEBUG    httpcore.http11  send_request_body.started request=<Request [b'GET']>
backend-1   | DEBUG    httpcore.http11  send_request_body.complete
backend-1   | DEBUG    httpcore.http11  receive_response_headers.started request=<Request [b'GET']>
backend-1   | DEBUG    httpcore.http11  receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'Accept-Ranges', b'bytes'), (b'Alt-Svc', b'h3=":443"; ma=2592000'), (b'Content-Length', b'440440'), (b'Content-Type', b'application/json'), (b'Etag', b'"s6zhnr9fug"'), (b'Last-Modified', b'Tue, 09 Jan 2024 07:47:51 GMT'), (b'Date', b'Fri, 10 May 2024 12:40:00 GMT')])
backend-1   | INFO     httpx            HTTP Request: GET https://cs.nbla.xyz/dump "HTTP/1.1 200 OK"
backend-1   | DEBUG    httpcore.http11  receive_response_body.started request=<Request [b'GET']>
backend-1   | DEBUG    httpcore.http11  receive_response_body.complete
backend-1   | DEBUG    httpcore.http11  response_closed.started
backend-1   | DEBUG    httpcore.http11  response_closed.complete
backend-1   | DEBUG    httpcore.connect close.started
backend-1   | DEBUG    httpcore.connect close.complete
backend-1   | TRACE    setup            Skipping unused classification scheme: urn:ebu:metadata-cs:ContractTypeCodeCS
backend-1   | TRACE    setup            Saved 17 classifications
backend-1   |
backend-1   | Create a new administrator
backend-1   |
backend-1   | This will create a new user with admin rights in the database
backend-1   | Hit Ctrl-C to cancel
backend-1   |
backend-1   |
backend-1   | Change user password
backend-1   |
backend-1   | This will change the password of an existing user
backend-1   | Hit Ctrl-C to cancel
backend-1   |
backend-1   |
backend-1   | Create a new administrator
backend-1   |
backend-1   | This will create a new user with admin rights in the database
backend-1   | Hit Ctrl-C to cancel
backend-1   |
backend-1   | TRACE    nebula           Adding endpoint /api/invalidate_session
backend-1   | TRACE    nebula           Adding endpoint /api/sessions
backend-1   | TRACE    nebula           Adding endpoint /proxy/{id_asset}
backend-1   | TRACE    nebula           Adding endpoint /api/order
backend-1   | TRACE    nebula           Adding endpoint /api/playout
backend-1   | TRACE    nebula           Adding endpoint /api/save_user
backend-1   | TRACE    nebula           Adding endpoint /api/user_list
backend-1   | TRACE    nebula           Adding endpoint /api/solve
backend-1   | TRACE    nebula           Adding endpoint /api/init
backend-1   | TRACE    nebula           Adding endpoint /api/scheduler
backend-1   | TRACE    nebula           Adding endpoint /upload/{id_asset}
backend-1   | TRACE    nebula           Adding endpoint /api/actions
backend-1   | TRACE    nebula           Adding endpoint /api/jobs
backend-1   | TRACE    nebula           Adding endpoint /api/send
backend-1   | TRACE    nebula           Adding endpoint /api/delete
backend-1   | TRACE    nebula           Adding endpoint /api/login
backend-1   | TRACE    nebula           Adding endpoint /api/logout
backend-1   | TRACE    nebula           Adding endpoint /api/password
backend-1   | TRACE    nebula           Adding endpoint /api/browse
backend-1   | TRACE    nebula           Adding endpoint /api/ops
backend-1   | TRACE    nebula           Adding endpoint /api/set
backend-1   | TRACE    nebula           Adding endpoint /api/get
backend-1   | TRACE    nebula           Adding endpoint /api/rundown
backend-1   | TRACE    nebula           Adding endpoint /api/services
backend-1   | TRACE    nebula           Loading settings
worker-1    | DEBUG    broker           Initializing service ID 1
worker-1    | DEBUG    broker           Connecting messaging to redis://redis
worker-1    | INFO     broker           Service started
backend-1   | INFO     nebula           Server started
backend-1   | INFO     nebula           storage 3 (playout) is not mounted. Mounting...
worker-1    | DEBUG    psm              Initializing service ID 14
worker-1    | DEBUG    psm              Connecting messaging to redis://redis
worker-1    | INFO     psm              Service started
backend-1   | INFO     nebula           storage 3 (playout) mounted successfully
worker-1    | DEBUG    play1            Initializing service ID 11
worker-1    | DEBUG    meta             Initializing service ID 3
worker-1    | DEBUG    meta             Following actions will be restarted on source update: all
worker-1    | DEBUG    meta             Connecting messaging to redis://redisDEBUG    play1            Connecting messaging to redis://redis
worker-1    |
worker-1    | INFO     play1            Starting OSC listener on port 6251
worker-1    | INFO     meta             Service started
worker-1    | INFO     play1            Using port 42101 for the HTTP interface.
worker-1    | DEBUG    play2            Initializing service ID 12
worker-1    | WARNING  play1            Playout plugins directory  does not exist
worker-1    | DEBUG    play2            Connecting messaging to redis://redis
worker-1    | INFO     play2            Starting OSC listener on port 6252
worker-1    | DEBUG    watch            Initializing service ID 2
worker-1    | INFO     play2            Using port 42102 for the HTTP interface.
worker-1    | INFO     play1            Service started
worker-1    | DEBUG    watch            Connecting messaging to redis://redis
worker-1    | WARNING  play2            Playout plugins directory  does not exist
worker-1    | INFO     watch            Service started
worker-1    | INFO     play2            Service started
worker-1    | DEBUG    conv             Initializing service ID 4
worker-1    | DEBUG    conv             Connecting messaging to redis://redis
worker-1    | DEBUG    conv             Registering action proxy
worker-1    | DEBUG    conv             Registering action playout
worker-1    | INFO     conv             Service started
worker-1    | WARNING  play1            Waiting for OSC
worker-1    | WARNING  play2            Waiting for OSC
martastain commented 5 months ago

1) Your target directory is the same as the original source; I don't understand your storage layout, to be honest. 2) That worker log does not show you starting playback (from Firefly).

LightRO commented 5 months ago

I have now made some progress: Firefly now loads the files from the correct (shared) folder. But when I now send the video to playout, I get the following error:

worker-1    | WARNING  play1            Waiting for OSC
worker-1    | WARNING  play2            Waiting for OSC
worker-1    | INFO     conv             Got job ID:5 [playout@asset ID:3 (NowBFNextTMOS)]
worker-1    | DEBUG    conv             Configuring task 1 of 1
worker-1    | INFO     conv             Starting task 1 of 1
worker-1    | INFO     conv             Executing ffmpeg -hide_banner -y -i /mnt/nebula_03/media.dir/movies/NowBFNextTMOS.mp4 -filter:a loudnorm=I=-23 -ar 48000 -c:v copy -c:a pcm_s16le /mnt/nebula_03/media/nebula-3.mxf
worker-1    | DEBUG    conv             Finalizing task 1 of 1
worker-1    | ERROR    conv             b''
worker-1    | ERROR    conv             job ID:5 [playout@asset ID:3 (NowBFNextTMOS)]: Failed to finalize task 1: Encoding failed
worker-1    | ERROR    conv
worker-1    |
worker-1    | Traceback (most recent call last):
worker-1    |       File "/opt/nebula/services/conv/conv.py", line 134, in on_main
worker-1    |         self.encoder.finalize()
worker-1    |       File "/opt/nebula/services/conv/ffmpeg.py", line 165, in finalize
worker-1    |         raise ConversionError("Encoding failed")
worker-1    |     services.conv.common.ConversionError: Encoding failed
worker-1    |
worker-1    | WARNING  play1            Waiting for OSC

Send to proxy no longer works either; it says write permissions are missing. But the question is whether I really need proxies at all, since playout works.

martastain commented 5 months ago

The MXF container is very picky about the essence it wraps. If you need to play out arbitrary media files, you may need to use a different profile for sending to playout (for example, use the MOV container, or force transcoding).
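To make that trade-off concrete, here is a hypothetical sketch (playout_args is illustrative, not Nebula's profile code) of how the send-to-playout ffmpeg command from the log above could either copy the essence into MOV, or force a video transcode for MXF instead of the failing -c:v copy:

```python
# Hypothetical sketch: vary the send-to-playout ffmpeg command by container.
# Copying arbitrary H.264 essence into MXF is what failed in the log above;
# MOV tolerates a copy, while MXF gets a re-encode to a wrappable codec.
def playout_args(source: str, dest_base: str, container: str) -> list[str]:
    args = ["ffmpeg", "-hide_banner", "-y", "-i", source,
            "-filter:a", "loudnorm=I=-23", "-ar", "48000",
            "-c:a", "pcm_s16le"]
    if container == "mov":
        args += ["-c:v", "copy"]                      # keep the essence as-is
    else:
        args += ["-c:v", "mpeg2video", "-b:v", "50M"] # re-encode for MXF
    return args + [f"{dest_base}.{container}"]

mov_cmd = playout_args("/mnt/nebula_03/media.dir/movies/NowBFNextTMOS.mp4",
                       "/mnt/nebula_03/media/nebula-3", "mov")
mxf_cmd = playout_args("/mnt/nebula_03/media.dir/movies/NowBFNextTMOS.mp4",
                       "/mnt/nebula_03/media/nebula-3", "mxf")
print(" ".join(mov_cmd))
```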

I don't know your intended workflow. It is indeed possible not to create proxies at all. They are crucial for reviewing and trimming clips, for example, but if you are sure about the content on your production storage, it is completely fine to disable proxy creation. In that case you may also want to set the ui_asset_preview flag in your server settings to False, to disable the low-res viewer in the web application.

LightRO commented 5 months ago

OK, I'll try to explain what I have in mind:

I want to run a 24-hour stream to Twitch via RTMP, but I also want exact control over what runs when, including commercial breaks etc.

I had this setup working with the older Nebula, but lost the configuration to an HDD crash for lack of backups.

My setup: Ubuntu 22.04 LTS runs in a VM, and Nebula currently runs on it. I would have liked to run CasparCG there as well, but it is very temperamental under Linux in a VM, so I run it on a Windows system instead.

The media folder inside the CasparCG directory on the Windows machine is shared on the network. I want to drop videos in there for Nebula to pick up and process. That folder is D:\Casparcg\media\media.dir\ and is accessible on the Linux server as /mnt/nebula_03/media.dir/. There is also another directory, D:\casparcg\media\media\, where the .mov files for playout are saved. That part now works quite well.

Now, however, CasparCG should play back whatever is currently scheduled.
This does not work yet; I now get this error:

backend-1   | INFO     admin            Saved item id=14
backend-1   | INFO     admin            Changed order in bins [3]
backend-1   | TRACE    admin            New duration of bin id=3 is 00:00:14.91 (1 items)
backend-1   | INFO     admin            Saved bin id=3
worker-1    | WARNING  play2            Waiting for OSC
worker-1    | INFO     psm              Set asset ID:3 (NowBFNextTMOS) playout status to CREATING
worker-1    | DEBUG    psm              Saving asset ID:3 (NowBFNextTMOS)
backend-1   | DEBUG    urllib3.connecti Starting new HTTP connection (1): worker:42101
worker-1    | DEBUG    play1            Executing AMCP: LOADBG 1-10 nebula-3 AUTO
worker-1    | ERROR    play1
worker-1    |
worker-1    | Traceback (most recent call last):
worker-1    |       File "/opt/nebula/services/play/request_handler.py", line 57, in do_POST
worker-1    |         result = self.server.methods[method](**postvars)
worker-1    |                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
worker-1    |       File "/opt/nebula/services/play/play.py", line 181, in cue
worker-1    |         return self.controller.cue(item=item, **kwargs)
worker-1    |                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
worker-1    |       File "/opt/nebula/services/play/casparcg/caspar_controller.py", line 263, in cue
worker-1    |         self.query(query)
worker-1    |       File "/opt/nebula/services/play/casparcg/caspar_controller.py", line 105, in query
worker-1    |         return self.cmdc.query(*args, **kwargs)
worker-1    |                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
worker-1    |       File "/opt/nebula/services/play/casparcg/amcp.py", line 115, in query
worker-1    |         raise e
worker-1    |       File "/opt/nebula/services/play/casparcg/amcp.py", line 113, in query
worker-1    |         raise CasparException(f"{result} error in CasparCG query '{query}'")
worker-1    |     services.play.casparcg.amcp.CasparException: 404 LOADBG FAILED error in CasparCG query 'LOADBG 1-10 nebula-3 AUTO'
worker-1    |
backend-1   | DEBUG    urllib3.connecti http://worker:42101 "POST /cue HTTP/1.1" 500 None
backend-1   | ERROR    nebula           EXCEPTION: 500 404 LOADBG FAILED error in CasparCG query 'LOADBG 1-10 nebula-3 AUTO'
worker-1    | WARNING  play1            Waiting for OSC
worker-1    | WARNING  play2            Waiting for OSC
worker-1    | INFO     psm              Set asset ID:3 (NowBFNextTMOS) playout status to ONLINE
worker-1    | DEBUG    psm              Saving asset ID:3 (NowBFNextTMOS)
worker-1    | WARNING  play1            Waiting for OSC

Edit: The problem above is the last one I still have; everything else is now solved. Once this is solved too, I will post all the changed files here for others who might run into the same problems.

By the way, I am using CasparCG version 2.4.0.

LightRO commented 4 months ago

I've gotten this far: CasparCG now responds to Nebula. The last problem is probably on Nebula's side: what is playing now and what plays next is not displayed, even though double-clicking one of the rundown items does send the video to CasparCG.

(screenshot)

I was able to eliminate all the other problems; many of them were settings that were not obvious at first glance. Your advice about the two Docker syntaxes also helped me a lot.

But maybe you know how to fix this last error :D

martastain commented 4 months ago

Great, you're really close! The last step, according to your previous log, is to get the OSC connection working (this is why Nebula does not receive information about what's playing).

Please refer to https://github.com/nebulabroadcast/nebula-tutorial/blob/main/doc/casparcg.md and check:

Keep in mind that after changing the configuration in docker-compose.yml, the container has to be re-created using docker compose up worker --build --detach.

LightRO commented 4 months ago

That's a bit trickier. Here are the settings I've configured that should be relevant:

channels.py

from nebula.settings.models import PlayoutChannelSettings, AcceptModel

scheduler_accepts = AcceptModel(folders=[1, 2])
rundown_accepts = AcceptModel(folders=[1, 3, 4, 5, 6, 7, 8, 9, 10])

channel1 = PlayoutChannelSettings(
    id=1,
    name="Channel 1",
    fps=25.0,
    plugins=[],
    solvers=[],
    day_start=(7, 0),
    scheduler_accepts=scheduler_accepts,
    rundown_accepts=rundown_accepts,
    rundown_columns=[],
    send_action=2,
    engine="casparcg",
    allow_remote=False,
    controller_host="worker",
    controller_port=42101,
    playout_storage=3,
    playout_dir="media",
    playout_container="mov",
    config={
        "caspar_host": "192.168.178.59",
        "caspar_port": 5250,
        "caspar_osc_port": 6251,
        "caspar_channel": 1,
        "caspar_feed_layer": 10,
     },
)

# Configure second channel similarly

channel2 = PlayoutChannelSettings(
    id=2,
    name="Channel 2",
    fps=25.0,
    plugins=[],
    solvers=[],
    day_start=(7, 0),
    scheduler_accepts=scheduler_accepts,
    rundown_accepts=rundown_accepts,
    rundown_columns=[],
    send_action=2,
    engine="casparcg",
    allow_remote=False,
    controller_host="worker",
    controller_port=42102,
    playout_storage=3,
    playout_dir="media",
    playout_container="mov",
    config={
        "caspar_host": "192.168.178.59",
        "caspar_port": 5250,
        "caspar_osc_port": 6252,
        "caspar_channel": 2,
        "caspar_feed_layer": 10,
    },
)

CHANNELS = [channel1, channel2]
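One thing worth verifying in a setup like the one above: each channel must use a distinct caspar_osc_port, and every one of those ports has to be published as UDP on the worker container in docker-compose.yml. A small sanity-check sketch (this helper is hypothetical, not part of Nebula):

```python
from typing import Iterable


def check_osc_ports(channel_configs: Iterable[dict], published_udp_ports: set) -> list:
    """Return the caspar_osc_port values that are NOT published as UDP
    ports on the worker container; an empty list means the mapping is OK.
    Raises ValueError if two channels share an OSC port."""
    osc_ports = [cfg["caspar_osc_port"] for cfg in channel_configs]
    if len(set(osc_ports)) != len(osc_ports):
        raise ValueError("each channel needs a distinct caspar_osc_port")
    return [p for p in osc_ports if p not in published_udp_ports]


# For the two channels above: 6251 and 6252 must both appear in the
# worker's ports section with the /udp suffix.
missing = check_osc_ports(
    [{"caspar_osc_port": 6251}, {"caspar_osc_port": 6252}],
    published_udp_ports={6251, 6252},
)
```

Here missing is empty, which matches the worker service below publishing 6251/udp and 6252/udp.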

docker-compose.yml

volumes:
  db: {}

services:
  postgres:
    image: postgres
    environment:
      - "POSTGRES_USER=nebula"
      - "POSTGRES_PASSWORD=nebula"
      - "POSTGRES_DB=nebula"
    volumes:
      - "/etc/localtime:/etc/localtime:ro"
      - "db:/var/lib/postgresql/data"
    restart: unless-stopped

  redis:
    image: redis:alpine
    restart: unless-stopped

  backend:
    image: nebulabroadcast/nebula-server:latest
    privileged: true
    ports:
      - "4455:80"
    volumes:
      - "/etc/localtime:/etc/localtime:ro"
      - "./plugins:/plugins"
      - "./settings:/settings"
      - "./storage:/mnt/nebula_01"
    environment:
      - "NEBULA_LOG_LEVEL=trace"
    depends_on:
      - redis
      - postgres

  worker:
    image: nebulabroadcast/nebula-worker:latest
    hostname: worker
    privileged: true
    ports:
      - "6251:6251/udp"
      - "6252:6252/udp"
    volumes:
      - "./storage:/mnt/nebula_01"
    depends_on:
      - backend

And the CasparCG Config:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <paths>
    <media-path>media/media</media-path>
    <log-path>log/</log-path>
    <data-path>data/</data-path>
    <template-path>template/</template-path>
    <font-path>font/</font-path>
  </paths>
  <lock-clear-phrase>secret</lock-clear-phrase>
  <channels>
    <channel>
      <video-mode>1080p5000</video-mode>
      <consumers>
        <screen/>
        <system-audio/>
      </consumers>
    </channel>
  </channels>
  <controllers>
    <tcp>
      <port>5250</port>
      <protocol>AMCP</protocol>
    </tcp>
  </controllers>
  <amcp>
    <media-server>
      <host>localhost</host>
      <port>8000</port>
    </media-server>
  </amcp>
  <osc>
    <predefined-clients>
      <predefined-client>
        <address>192.168.178.56</address>
        <port>6251</port>
      </predefined-client>
      <predefined-client>
        <address>192.168.178.56</address>
        <port>6252</port>
      </predefined-client>
    </predefined-clients>
  </osc>
</configuration>
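To double-check which OSC clients CasparCG would actually load from a casparcg.config file, the XML can be parsed with Python's standard library. A sketch; note that the config must be a single well-formed document, with the osc element nested inside configuration:

```python
import xml.etree.ElementTree as ET


def osc_clients(config_xml: str) -> list:
    """Return (address, port) pairs for every predefined OSC client
    found anywhere in a casparcg.config document."""
    root = ET.fromstring(config_xml)
    return [
        (client.findtext("address"), int(client.findtext("port")))
        for client in root.iter("predefined-client")
    ]
```

For the configuration above this yields [("192.168.178.56", 6251), ("192.168.178.56", 6252)], so both Nebula playout channels are addressed.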

The firewall rules on Windows are in place, and on Ubuntu the two ports are open and allowed through the firewall as well.

It still doesn't work. After changing the configuration, I also ran the rebuild command you gave in your last comment.

Could it be due to the casparcg version?

martastain commented 4 months ago

LGTM... this is hard to debug, honestly. I'd suspect the firewall or the host IP, but it's hard to say. The "Waiting for OSC" log message appears as long as the OSC connection is not established.

LightRO commented 4 months ago

I think I have the error but I don't know how to fix it yet.

CasparCG ships two executables, CasparCG.exe and scanner.exe, and both must be running. Despite the configuration, the OSC port does not seem to be used, so I assume OSC is not sending.

How does CasparCG work for you? I can't get any further with this. Something is wrong with the OSC and I don't quite understand what the problem is. Scanner.exe starts and runs without errors.

LightRO commented 4 months ago

OK, I have now monitored the traffic with Wireshark and discovered the following: no matter which port you enter in the OSC configuration of CasparCG 2.4.0, it always sends to port 6250. I can't say whether this is a bug on CasparCG's side; what I can say is that it occurs with CasparCG versions 2.1.0 through 2.4.0.
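A finding like this can also be reproduced without Wireshark by briefly listening on the suspected port on the Nebula host and seeing whether a datagram arrives. A minimal sketch; 6250 is the port the capture showed, and the bind address is a placeholder:

```python
import socket
from typing import Optional


def wait_for_osc_packet(port: int, timeout: float = 5.0) -> Optional[bytes]:
    """Bind a UDP socket on 0.0.0.0:port and wait for one datagram.

    Returns the raw payload, or None if nothing arrives within the
    timeout. Run this on the Nebula host (with the worker stopped, so
    the port is free) to see where CasparCG's OSC traffic really lands."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    sock.bind(("0.0.0.0", port))
    try:
        data, _addr = sock.recvfrom(4096)
        return data
    except socket.timeout:
        return None
    finally:
        sock.close()
```

If wait_for_osc_packet(6250) returns data while wait_for_osc_packet(6251) times out, the traffic is going to 6250 regardless of what the config says, matching the Wireshark observation.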

I have adapted Nebula's channel 1 to this port and, lo and behold, it works.

I will put the configs together and make them available here, in case someone else has the same setup.