blindsidenetworks / scalelite

Scalable load balancer for BigBlueButton.
GNU Affero General Public License v3.0

Multitenancy: getRecordings API call only returns tenant-id as metadata #987

Closed: Ithanil closed this issue 11 months ago

Ithanil commented 11 months ago

Describe the bug

When multitenancy is enabled and tenants are added, the response XML for getRecordings only contains the tenant-id as metadata. In particular, 'gl-listed' is not included, which renders all recordings "unlisted" in Greenlight.

To Reproduce

Enable multitenancy with recordings. Look at the recording list in Greenlight or at the response from getRecordings calls (a reproduction sketch follows the deployment details below).

Deployment:

  1. Deployment:
  2. Versions:
  3. Tools used for reproducing the issue: Scalelite v1.5.1.2, custom docker-compose
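
To check outside Greenlight, here is a minimal reproduction sketch assuming the standard BigBlueButton-style SHA-1 checksum; SCALELITE_URL and SCALELITE_SECRET are placeholder names for this example, not Scalelite configuration options.

```ruby
# Reproduction sketch: call getRecordings on the Scalelite API and print the
# metadata keys returned per recording. SCALELITE_URL and SCALELITE_SECRET are
# placeholders for this example, not official names.
require 'digest'
require 'net/http'
require 'rexml/document'

base   = ENV.fetch('SCALELITE_URL')    # e.g. https://scalelite.example.com/bigbluebutton/api
secret = ENV.fetch('SCALELITE_SECRET') # shared secret for this (tenant's) endpoint

call     = 'getRecordings'
query    = ''                          # no filters: list all recordings
checksum = Digest::SHA1.hexdigest("#{call}#{query}#{secret}")

body = Net::HTTP.get(URI("#{base}/#{call}?checksum=#{checksum}"))

doc = REXML::Document.new(body)
doc.elements.each('response/recordings/recording') do |rec|
  keys = rec.get_elements('metadata/*').map(&:name)
  # With the bug present, only 'tenant-id' shows up here; 'gl-listed' and the
  # other Greenlight metadata are missing.
  puts "#{rec.elements['recordID'].text}: #{keys.join(', ')}"
end
```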

Expected behavior

The recording metadata should be returned correctly.

Additional context

Uncommenting https://github.com/blindsidenetworks/scalelite/blob/2c3b29e04196a55d907b21d4737846cfa5e5f064/app/controllers/bigbluebutton_api_controller.rb#L337C17-L337C17 "fixes" the issue. I'm looking into a better way to fix this, but I'm not sure I understand the problem and the code well enough.
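
For anyone digging into this, a rough sketch of the general ActiveRecord pattern that can produce this symptom; the model and association names (Recording, Metadatum, :metadata, :playback_formats, :thumbnails) are assumptions for illustration, not Scalelite's actual code.

```ruby
tenant_id = '42'  # placeholder: however the current tenant is resolved

# Scoping recordings to a tenant by joining the metadata table on
# key = 'tenant-id' restricts the *joined* metadata rows to that single key.
scoped = Recording.joins(:metadata)
                  .where(metadata: { key: 'tenant-id', value: tenant_id })

# If the XML is rendered from the rows brought in by that same join (as
# happens when the condition forces an eager_load), every recording appears
# to carry only the tenant-id metadatum, so 'gl-listed' never reaches
# Greenlight.

# Loading the full metadata association separately for the already-scoped
# recordings avoids that, which seems to be what re-enabling the commented
# line effectively does:
recordings = Recording.includes(:metadata, playback_formats: :thumbnails)
                      .where(id: scoped.select(:id))
```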

farhatahmad commented 11 months ago

Good catch - working on a fix

farhatahmad commented 11 months ago

https://github.com/blindsidenetworks/scalelite/releases/tag/v1.5.1.3

Ithanil commented 11 months ago

The issue needs to be reopened: although the fix resolves the problem itself, it seems to lead to a huge, inefficient SELECT query with a WHERE clause containing every single recording id. It appears in the log files like this:

SELECT "recordings"."id" AS t0_r0, "recordings"."record_id" AS t0_r1, "recordings"."meeting_id" AS t0_r2, "recordings"."name" AS t0_r3, "recordings"."published" AS t0_r4, "recordings"."participants" AS t0_r5, "recordings"."state" AS t0_r6, "recordings"."starttime" AS t0_r7, "recordings"."endtime" AS t0_r8, "recordings"."deleted_at" AS t0_r9, "recordings"."publish_updated" AS t0_r10, "recordings"."protected" AS t0_r11, "playback_formats"."id" AS t1_r0, "playback_formats"."recording_id" AS t1_r1, "playback_formats"."format" AS t1_r2, "playback_formats"."url" AS t1_r3, "playback_formats"."length" AS t1_r4, "playback_formats"."processing_time" AS t1_r5, "thumbnails"."id" AS t2_r0, "thumbnails"."playback_format_id" AS t2_r1, "thumbnails"."width" AS t2_r2, "thumbnails"."height" AS t2_r3, "thumbnails"."alt" AS t2_r4, "thumbnails"."url" AS t2_r5, "thumbnails"."sequence" AS t2_r6, "metadata"."id" AS t3_r0, "metadata"."recording_id" AS t3_r1, "metadata"."key" AS t3_r2, "metadata"."value" AS t3_r3 FROM "recordings" LEFT OUTER JOIN "playback_formats" ON "playback_formats"."recording_id" = "recordings"."id" LEFT OUTER JOIN "thumbnails" ON "thumbnails"."playback_format_id" = "playback_formats"."id" LEFT OUTER JOIN "metadata" ON "metadata"."recording_id" = "recordings"."id" WHERE "recordings"."id" IN ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10, $11, $12, $13, $14, $15, $16, $17, $18, ...... ...... ...... , $18282,

I didn't notice it on our test instance, but in production the log files grow huge very quickly. ;-)
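
Here is a rough sketch of the pattern behind that log line and one way to avoid it; Recording/Metadatum and the column names are assumptions again, not Scalelite's actual code.

```ruby
tenant_id = '42'  # placeholder: however the current tenant is resolved

# Materializing the ids in Ruby yields one bind parameter per recording,
# which is what produces WHERE "recordings"."id" IN ($1, ..., $18282):
ids = Metadatum.where(key: 'tenant-id', value: tenant_id).pluck(:recording_id)
recordings = Recording.includes(:metadata, playback_formats: :thumbnails)
                      .where(id: ids)

# Passing the relation itself keeps the tenant filter as a subquery, so the
# statement text stays a fixed size no matter how many recordings a tenant has:
tenant_recording_ids = Metadatum.where(key: 'tenant-id', value: tenant_id)
                                .select(:recording_id)
recordings = Recording.includes(:metadata, playback_formats: :thumbnails)
                      .where(id: tenant_recording_ids)
```

The subquery form would also keep the logged SQL short, since the id list never leaves the database.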