Closed by cpfarhood 5 months ago
This issue is stale because it has been open for 30 days with no activity.
This issue was closed because it has been inactive for 14 days since being marked as stale.
This could be related to the fact that Plex's license file for EAE had expired. The latest release ships an up-to-date version, so it's worth a shot.
Describe the bug
When starting a new job for some content, transcoding begins on worker0; the job is then abruptly killed and moved to worker1, which succeeds at starting the stream. This creates long delays before playback starts.
To Reproduce
Steps to reproduce the behavior:
1) Start playback
2) Watch worker load/logs
3) ?
4) Profit
Expected behavior
The first transcode appears to be working; playback should continue from it rather than the task being killed and restarted on another worker.
Additional context
I'm seeing this error in the logs on both workers:
[truehd_eae @ 0x7f88d9092900] EAE timeout! EAE not running, or wrong folder? Could not read '/tmp/pms-859148ad-6026-4728-9208-582611f8417f/EasyAudioEncoder/Convert to WAV (to 8ch or less)/4rwyr1unhk54qu7m6awant1g_1142-0-224.wav'
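The "EAE timeout" suggests the transcoder gave up waiting on EasyAudioEncoder's scratch folder. As a sanity check, here is a minimal Node.js/TypeScript sketch (my own diagnostic, not part of ClusterPlex) that verifies the EAE session folder from the error above exists and is writable inside a worker container; the pms-* GUID is per transcode session, so substitute the one from your own logs:

```ts
// Diagnostic sketch: confirm the EAE scratch folder exists and is writable.
// The pms-* GUID below is copied from the error message and changes per session.
import * as fs from 'fs';
import * as path from 'path';

const eaeRoot = '/tmp/pms-859148ad-6026-4728-9208-582611f8417f/EasyAudioEncoder';
const convertDir = path.join(eaeRoot, 'Convert to WAV (to 8ch or less)');

if (!fs.existsSync(convertDir)) {
  console.error(`Missing: ${convertDir} (EAE not running here, or wrong folder?)`);
} else {
  // Throws if the transcoder user cannot read/write where EAE picks up work.
  fs.accessSync(convertDir, fs.constants.R_OK | fs.constants.W_OK);
  console.log('EAE scratch folder present and writable');
}
```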
Logs from orchestrator:
Client connected: rm-MICzmmeLsJqQHAAAX
Registered new job poster: 100a2e9c-e3eb-4e7b-a6d8-713e27963141|clusterplex-pms-0
Creating single task for the job
Queueing job 03e92f60-352a-4514-b774-cfe4e1777ae6
Queueing task 7f0c1177-2d00-4c11-8de3-474ca80f40c4
Running task 7f0c1177-2d00-4c11-8de3-474ca80f40c4
Forwarding work request to ad31f163-68b7-4b58-84bd-92587b3a566a|clusterplex-worker-0
Received update for task 7f0c1177-2d00-4c11-8de3-474ca80f40c4, status: received
Received update for task 7f0c1177-2d00-4c11-8de3-474ca80f40c4, status: inprogress
Client disconnected: rm-MICzmmeLsJqQHAAAX
Removing job-poster 100a2e9c-e3eb-4e7b-a6d8-713e27963141|clusterplex-pms-0 from pool
Killing job 03e92f60-352a-4514-b774-cfe4e1777ae6
Telling worker ad31f163-68b7-4b58-84bd-92587b3a566a|clusterplex-worker-0 to kill task 7f0c1177-2d00-4c11-8de3-474ca80f40c4
Job 03e92f60-352a-4514-b774-cfe4e1777ae6 killed
Client connected: 7BE9s3Mn0HnnXp47AAAZ
Registered new job poster: 2749ce8b-4165-4f7a-b463-08951dd29248|clusterplex-pms-0
Creating single task for the job
Queueing job 6cc68c95-5e08-49a2-9f9f-58c215a9f2f4
Queueing task c83ea0a6-7853-4807-9eb2-cd98738dcdfe
Running task c83ea0a6-7853-4807-9eb2-cd98738dcdfe
Forwarding work request to 046e13ea-f8cd-4c72-b88a-b8b31bff57df|clusterplex-worker-1
Received update for task c83ea0a6-7853-4807-9eb2-cd98738dcdfe, status: received
Received update for task c83ea0a6-7853-4807-9eb2-cd98738dcdfe, status: inprogress
Received update for task 7f0c1177-2d00-4c11-8de3-474ca80f40c4, status: done
Discarding task update for 7f0c1177-2d00-4c11-8de3-474ca80f40c4
Client disconnected: 7BE9s3Mn0HnnXp47AAAZ
Removing job-poster 2749ce8b-4165-4f7a-b463-08951dd29248|clusterplex-pms-0 from pool
Killing job 6cc68c95-5e08-49a2-9f9f-58c215a9f2f4
Telling worker 046e13ea-f8cd-4c72-b88a-b8b31bff57df|clusterplex-worker-1 to kill task c83ea0a6-7853-4807-9eb2-cd98738dcdfe
Job 6cc68c95-5e08-49a2-9f9f-58c215a9f2f4 killed
Received update for task c83ea0a6-7853-4807-9eb2-cd98738dcdfe, status: done
Discarding task update for c83ea0a6-7853-4807-9eb2-cd98738dcdfe
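Reading the log, the sequence is: PMS registers as a job poster, worker-0 starts the task, then the PMS socket disconnects and the orchestrator kills the whole job before a new session is queued onto worker-1. A hedged TypeScript sketch of that kill-on-disconnect pattern follows; all type and function names here are illustrative guesses, not ClusterPlex's actual code:

```ts
// Illustrative reconstruction of the kill-on-disconnect behavior in the log;
// names like Job, Task, and killTaskOnWorker are hypothetical.

interface Task { id: string; workerId: string; }
interface Job { id: string; tasks: Task[]; }

const jobsByPoster = new Map<string, Job[]>();

// In the real orchestrator this would be a message over the worker's socket,
// telling it to terminate the in-flight transcode process.
function killTaskOnWorker(workerId: string, taskId: string): void {
  console.log(`Telling worker ${workerId} to kill task ${taskId}`);
}

// Invoked when a job poster's socket drops ("Client disconnected: ..." above).
function onPosterDisconnected(posterId: string): void {
  console.log(`Removing job-poster ${posterId} from pool`);
  // Every job owned by the disconnected poster is killed, even if a worker
  // is mid-transcode -- matching the "Killing job ..." lines in the log.
  for (const job of jobsByPoster.get(posterId) ?? []) {
    console.log(`Killing job ${job.id}`);
    job.tasks.forEach(t => killTaskOnWorker(t.workerId, t.id));
    console.log(`Job ${job.id} killed`);
  }
  jobsByPoster.delete(posterId);
}
```

If the EAE timeout causes PMS to tear down and re-open the transcode session, a handler like this would explain why worker-0's half-finished task is killed and restarted on worker-1 rather than resumed.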