clamsproject / mmif-visualizer

A web site to visualize MultiMedia Interchange Format json
Apache License 2.0

"Abstract view type" checking is very naive #41

Open haydenmccormick opened 4 weeks ago

haydenmccormick commented 4 weeks ago

Because

Views are rendered as appropriate tabs by attempting to ascertain the "abstract view type" (e.g. OCR, ASR, NER). This check is extremely naive, especially for OCR views, since there is no standardized schema for what the output of a "thumbnail" view should look like. For example, these views could output:

As a Band-Aid solution, the OCR abstract view type checker just pattern-matches against a list of known OCR/CV apps to determine whether a view should get a thumbnail tab. This is obviously not great for long-term development, since the list would have to be manually updated for every new OCR app, and it wouldn't support third-party (non-CLAMS-developed) apps.
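For illustration, the kind of naive check described above might look like the following sketch. The app names and the `app` metadata key used here are assumptions, not the visualizer's actual implementation:

```python
# Hypothetical sketch of a naive "abstract view type" check: match the
# view's producing app identifier against a hard-coded list of known
# OCR/CV apps. App names below are illustrative only.
KNOWN_OCR_APPS = ["tesseract", "east-textdetection", "doctr", "parseq"]

def is_ocr_view(view_metadata: dict) -> bool:
    """Return True if the view appears to come from a known OCR/CV app."""
    app_id = view_metadata.get("app", "").lower()
    return any(name in app_id for name in KNOWN_OCR_APPS)
```

This is exactly the maintenance problem the issue describes: every new OCR app requires editing `KNOWN_OCR_APPS`, and third-party apps are invisible to the check.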

Done when

This could be addressed in a few different ways:

  1. A standardized "abstract view type" as part of the metadata for MMIF views, defined by each CLAMS app separately
  2. Allowing the user to add views as tabs manually and specify their abstract type from within the app
  3. A much more advanced rule-based type checking system (if there is some consistent and maintained pattern)
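Option 1 could be sketched roughly as follows. The `abstractViewType` metadata key and the enum of type names are assumptions for illustration; the actual field name and vocabulary would need to be defined in the MMIF/CLAMS specs:

```python
# Sketch of option 1: each CLAMS app declares its abstract view type in
# the view metadata, and the visualizer dispatches on that declared field
# instead of guessing from the view's contents.
from enum import Enum

class AbstractViewType(Enum):
    OCR = "ocr"
    ASR = "asr"
    NER = "ner"
    OTHER = "other"

def abstract_view_type(view_metadata: dict) -> AbstractViewType:
    """Read the declared abstract type, falling back to OTHER."""
    declared = view_metadata.get("abstractViewType")
    try:
        return AbstractViewType(declared)
    except ValueError:
        return AbstractViewType.OTHER
```

With a declared type, third-party apps would work out of the box as long as they fill in the metadata field.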

There are most likely other solutions that would be more elegant and robust; I'm leaving this issue open for further discussion of potential strategies for dealing with this problem.

Additional context

No response

keighrim commented 4 weeks ago

I believe this issue should be elevated to the SDK level.


I like idea #1. We could pre-define "app patterns" in the clams-python SDK, and each individual app could be a subclass of one of these patterns (perhaps displayed explicitly in the app metadata). We could then re-use the app patterns for the evaluation repo. We currently have arbitrary "themes" in the evaluation subdirectories, so we could start from those and incrementally expand the list of patterns while roughly maintaining (loose) alignment with the evaluation themes.
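A minimal sketch of what SDK-level app patterns might look like, assuming a simple class hierarchy; the class and attribute names here are hypothetical and not part of the clams-python API:

```python
# Hypothetical "app pattern" hierarchy for the SDK. Each app subclasses
# a pattern, and downstream consumers (visualizer, evaluation repo) read
# the pattern name from the class rather than guessing from output.
class ClamsAppPattern:
    pattern_name = "generic"

class OCRPattern(ClamsAppPattern):
    pattern_name = "ocr"

class ASRPattern(ClamsAppPattern):
    pattern_name = "asr"

class MyTextDetectionApp(OCRPattern):
    """An individual app inherits its abstract pattern from the hierarchy."""

def app_pattern(app_cls: type) -> str:
    """Look up the declared pattern for an app class."""
    return getattr(app_cls, "pattern_name", "generic")
```

The same `pattern_name` could be surfaced in app metadata, giving the evaluation repo and the visualizer a single shared vocabulary of patterns.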

@MrSqually I remember you had at some point a similar idea while working on evaluation automation. Can you post your notes from the previous discussion here as well?