cta-wave / WMAS

Test suite for the Web Media API Snapshot Specification

Limitation on how many tests can be excluded or included #78

Open · yanj-github opened this issue 1 year ago

yanj-github commented 1 year ago

Ref: https://github.com/cta-wave/WMAS/issues/77

As far as I understand, there shouldn't be any limitation on how many tests can be excluded. However, when I enter a longer list of excluded tests (more than 100), the WMAS server hangs and won't process any further. Is there a way for a user to configure a test session with a longer excluded or included test list, please?
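For reference, this is roughly what I am trying to do: create a session with a long exclude list against a local deployment. This is only a sketch; the endpoint URL and payload field names here are assumptions based on my setup, not the documented WMAS API:

```python
import json
import urllib.request

# Hypothetical example: the endpoint and payload shape are assumptions,
# not the documented WMAS session API.
excluded = [f"/2dcontext/test-{i}.html" for i in range(150)]  # >100 entries

payload = json.dumps({"tests": {"exclude": excluded}}).encode()
req = urllib.request.Request(
    "http://localhost:8000/api/sessions",  # assumed local deployment URL
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(resp.status, resp.read().decode())
```

With around 100 or fewer excluded tests the session is created fine; anything longer makes the server hang.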

FritzHeiden commented 1 year ago

Have you tried a local deployment using https://github.com/cta-wave/WMAS-deploy? The hosted instance is intended for demo usage, not for production.

yanj-github commented 1 year ago

Thanks @FritzHeiden. I have already tried a local deployment of wmas2020 and it doesn't work for a long list of tests either. I was trying to run a subset of tests from https://github.com/cta-wave/WMAS-subset/blob/main/subset_of_tests.json, which contains a long list of tests to be included.
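For context, here is a minimal sketch of how I am turning that file into an include list for the runner, assuming it is a flat JSON array of test paths (adjust if the actual structure differs):

```python
import json

# Load the subset definition; assumed here to be a flat JSON array of
# test paths, e.g. ["/2dcontext/...", "/fetch/...", ...].
with open("subset_of_tests.json") as f:
    tests = json.load(f)

print(f"{len(tests)} tests in the subset")

# One path per line, ready to paste into the runner's include field.
print("\n".join(tests))
```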

pshorrock commented 1 year ago

@FritzHeiden @louaybassbouss could we please get an update on this issue? @yanj-github is seeing the problem when trying to run any subset of more than 100 tests, regardless of where the WMAS test runner is installed. Can you please confirm whether you also see this issue and, if so, whether there is a planned approach to resolving it? We are assuming an HbbTV subset will be run using the JSON file output from the HbbTV activities as an input; if not, can you please confirm what approach needs to be taken? How do you currently manage to run a large subset of tests (more than 100 tests either included or excluded)?

FritzHeiden commented 1 year ago

@yanj-github @pshorrock I pushed a fix for this issue to the wmas2021 branch.

How do you currently manage to run a large subset of tests (more than 100 tests either included or excluded)?

We are running large subsets by setting them up in the file system using shell scripts. This, however, is only applicable because the desired subset is fixed for a single WMAS release. The exclude list was intended for removing problematic tests, which never numbered more than 100, so we never ran into this performance issue. Thanks for reporting.
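For anyone who wants to replicate that approach, here is a rough Python equivalent (the actual shell scripts are not public, so the paths and JSON structure below are assumptions): prune a local checkout of the tests down to the subset before deploying, so no include/exclude list is needed at session-configuration time.

```python
import json
from pathlib import Path

TESTS_ROOT = Path("WMAS/tests")             # assumed location of the test checkout
SUBSET_FILE = Path("subset_of_tests.json")  # assumed flat JSON array of test paths

with SUBSET_FILE.open() as f:
    subset = set(json.load(f))

# Remove every test file that is not named in the subset; the deployed
# server then only ever serves the fixed subset for this release.
for test_file in TESTS_ROOT.rglob("*.html"):
    rel = "/" + test_file.relative_to(TESTS_ROOT).as_posix()
    if rel not in subset:
        test_file.unlink()
```

The obvious limitation, as noted above, is that this bakes the subset into a single deployment rather than letting it vary per session.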

yanj-github commented 1 year ago

Tested and it's working on my local setup. Can you help fix it on wmas2021, 2019 and 2018, please?

yanj-github commented 1 year ago

@FritzHeiden Is this fixed on all other versions and applied to the cloud-hosted version, please?

FritzHeiden commented 11 months ago

I applied the fix to the wmas2021, wmas2020 and wmas2019 code bases and redeployed the new versions. For wmas2018, some more testing is required before the fix can be applied.

bobcampbell-resillion commented 9 months ago

Hi, what is the ETA on this ticket being resolved?

HbbTV 2.0.3 references WMAS 2018, and therefore receivers that would be expected to pass a subset defined by HbbTV need to exclude the tests in that list. It won't be a list that varies daily, but it will change outside the cycle of WMAS updates (HbbTV itself hasn't even finally approved/agreed the official subset yet), and I anticipate it will be updated from time to time.

The same problem applies to ATSC: they'll use the same subset, but it's conceivable they'll identify APIs/tests that need to be included/excluded independently of HbbTV.

And then, as a manufacturer, I might want to run a longer list, but not all tests in areas I know I don't support.

So, for lots of reasons, users of the WMAS test tools need to be able to define an exclude list that isn't tied to the version of WMAS and isn't too complex to apply to their running instance. I guess it's fine if that level of complexity isn't supported on the non-production hosted versions, but it needs to be supported somehow in deployed versions.

Thanks

FritzHeiden commented 7 months ago

I created a PR to update the test runner of wmas2018: https://github.com/cta-wave/WMAS/pull/85. As wmas2018 is rather old and its runner has not been kept up to date with the latest one, there is quite a diff between the versions. I will have to perform a few test runs to see if everything works as expected.