BrechtDeMan / WebAudioEvaluationTool

A tool based on the HTML5 Web Audio API to perform perceptual audio evaluation tests locally or on remote machines over the web.
https://code.soundsoftware.ac.uk/projects/webaudioevaluationtool
GNU General Public License v3.0

Analysis page plots audioelements with same name differently #57

Open BrechtDeMan opened 8 years ago

BrechtDeMan commented 8 years ago

Consider the situation where audioelements occur on several pages, e.g. where pages correspond to different songs and mixers A through G have made mixes of each song; or where pages correspond to different types of sound effects and markers A through F correspond to different synthesis models.

In the analysis page, you would then expect to be able to see the average rating for each audioelement/mixer/model 'name'. Of course you also want to see them per page, but in the attached graph, ratings for the same name should be joined. Alternatively, it should at least be made clear which bar corresponds to which page, though that is essentially already covered by the 'per page' plot.

As it is, it would seem that different IDs (and all IDs are different by definition) are plotted separately, as opposed to different 'names'.

[Attached screenshot: screen shot 2016-05-04 at 12 23 11]

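To make the requested behaviour concrete, here is a minimal Python sketch contrasting grouping by unique ID with grouping by shared name. The tuple layout and the values are hypothetical, for illustration only, and do not reflect the tool's actual result format:

```python
from collections import defaultdict

# Hypothetical flat list of results: (page, element_id, name, rating).
# IDs are unique per element; names are shared across pages.
results = [
    ("song1", "elem-0", "MixerA", 0.8),
    ("song1", "elem-1", "MixerB", 0.5),
    ("song2", "elem-2", "MixerA", 0.6),
    ("song2", "elem-3", "MixerB", 0.7),
]

# Grouping by unique ID (current behaviour): every bar is separate.
by_id = defaultdict(list)
for page, elem_id, name, rating in results:
    by_id[elem_id].append(rating)

# Grouping by shared name (requested behaviour): ratings for the
# same mixer/model are pooled across pages before averaging.
by_name = defaultdict(list)
for page, elem_id, name, rating in results:
    by_name[name].append(rating)

averages = {name: sum(r) / len(r) for name, r in by_name.items()}
print(averages)  # MixerA averages to roughly 0.7, MixerB to roughly 0.6
```
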
nickjillings commented 8 years ago

Strange, as it is certainly labelling them using name (otherwise the axis labels would all be different). The server-side PHP should group these together automatically.

nickjillings commented 8 years ago

The server side groups items based on their page ranking. So if two elements have the same name tag in a page they are grouped together, but this does not span pages.

BrechtDeMan commented 8 years ago

I would argue it should span pages for the 'general' plot, and then have the option (already available) to make per-page plots.

djmoffat commented 8 years ago

I disagree.

I think we should have plots per page, nothing more.

I thought we agreed that getting some basic analysis visualisations was useful, but that it was certainly not a priority to develop any further visualisations.



BrechtDeMan commented 8 years ago

Absolutely, but I'm not requesting an extra visualisation, just to fix one of the three visualisations already present: 'Charts per test', 'Charts per test page', 'Charts per element'. Ideally we'd be able to offer a minimum of functionality that answers a maximum of potential users' needs (and surveying papers with listening tests it would seem this does a good job at covering the most frequent evaluations).

Also, as @nickjillings said before, bugs should be logged regardless of their priority. This may not be a 'v1.2.0 milestone' bug but it's still worth tracking.

djmoffat commented 8 years ago

I would suggest it is a feature request, not a bug.

We do not have a priority scale on GitHub, other than urgent / non-urgent. Should we develop this?



djmoffat commented 8 years ago

I believe the visualisations we offer already surpass minimum functionality, and I can find very few of the papers you reference.



BrechtDeMan commented 8 years ago

I think most typical listening tests report an average rating for each 'type' of stimulus over different pages, and/or the average rating of each stimulus per page, which is basically what we're looking to offer here. I think a more individual overview (like a histogram per stimulus) is also very useful from a diagnostics standpoint, e.g. to spot any flaw in the design after a short pilot study.

That said, I'm not saying I disagree with you, and there is no right or wrong here, I think. The design philosophy should be to aim for a tool that's simple (at least on the outside), intuitive, multi-purpose and powerful, balancing all these aspects. So as long as an extra feature is not in the way (I think it wouldn't be in the analysis page, where everything is fairly self-explanatory), I wouldn't mind adding an extra kind of plot, within reason.

With regards to priority, we only have the 'milestones', and anything under '1.2.0' is more urgent than anything else. There are also the 'bug' and 'enhancement' labels, where the latter is usually less important, as it is not a problem with existing functionality but an addition to it. If we need more labels we can always create them, of course.

nickjillings commented 8 years ago

This is a bug as the server side scripts are supposed to collapse tags with a name together (that is why there is a distinction in the XML between the unique ID and shareable name attributes).

Whilst it is most certainly not high priority (hence not assigned to the v1.2.0 milestone), it is certainly a bug: we say the tool can do this, but currently it doesn't.

It should be possible to do both types of graph (all elements separated, or elements grouped and labelled using their name).

djmoffat commented 8 years ago

Ok @nickjillings

nickjillings commented 7 years ago

Moving this out of the v1.2.1 milestone. The analysis is not well maintained and needs looking at (after 1.2.2 - test creator).

othiman commented 6 years ago

Hi, I am creating a listening test for my PhD thesis right now, and first of all, this tool is really great and helpful. I stumbled over this bug myself while creating a MUSHRA test. I take a few stimuli from the TIMIT database and process them with different algorithms. I put each stimulus on a page and add each algorithm as a fragment. The algorithm name is then set in the corresponding name tag for each fragment. Like @BrechtDeMan, I would expect the name tags to be joined together, which would be really helpful to get a quick overview. These kinds of tests are often used at our institute, so this would be a great feature in my opinion.

nickjillings commented 6 years ago

Yes, you are correct: the "name" tag should be used as a grouping key. This is mostly a different issue, as the online visualization tools have long been neglected due to a lack of interest in them, especially for private tests!

The Python scripts "should" group by name; if they don't, then that's a different bug to this one, so please raise a new issue!

othiman commented 6 years ago

Is there a Python script that is supposed to group the results by name? All I found are scripts that do this page by page.

nickjillings commented 6 years ago

At some point they did, but it gets complicated, as you end up with a multi-dimensional CSV (i.e. some rows longer than others), so I think it has been removed.

Perhaps it could be added as an option group with the knowledge that you will get this kind of file.

But if needed, then another request should be opened.
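For illustration, the kind of ragged CSV described above might look like this. The names, pages and values are hypothetical, and the dict layout is an assumption, not the scripts' actual internal representation:

```python
import csv
import io
from collections import defaultdict

# Hypothetical ratings keyed by (name, page); some names appear on
# more pages than others, so per-name rows have different lengths.
ratings = {
    ("MixerA", "song1"): 0.8,
    ("MixerA", "song2"): 0.6,
    ("MixerA", "song3"): 0.9,
    ("MixerB", "song1"): 0.5,
}

# Collect values per name, in a stable (sorted) order.
rows = defaultdict(list)
for (name, page), value in sorted(ratings.items()):
    rows[name].append(value)

# Write one row per name; row lengths vary ("ragged" CSV).
buf = io.StringIO()
writer = csv.writer(buf)
for name, values in rows.items():
    writer.writerow([name, *values])

print(buf.getvalue())
```
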

othiman commented 6 years ago

I challenged my Python skills and hacked it into the score_parser and score_plot scripts. It does not look nice, but it works for me. ;-)
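For anyone else hitting this, here is a self-contained sketch of the grouping idea. The XML structure, tag names and attribute names below are assumptions made for illustration only, not the tool's actual result schema; the point is simply to key ratings on the shareable name attribute instead of the unique id:

```python
import xml.etree.ElementTree as ET
from collections import defaultdict

# Hypothetical result file: element and attribute names are assumed.
xml_data = """
<waetresult>
  <page ref="song1">
    <audioelement id="elem-0" name="MixerA"><value>0.8</value></audioelement>
    <audioelement id="elem-1" name="MixerB"><value>0.5</value></audioelement>
  </page>
  <page ref="song2">
    <audioelement id="elem-2" name="MixerA"><value>0.6</value></audioelement>
  </page>
</waetresult>
"""

root = ET.fromstring(xml_data)

# Pool ratings by the shareable "name" attribute, spanning pages.
scores = defaultdict(list)
for page in root.iter("page"):
    for elem in page.iter("audioelement"):
        scores[elem.get("name")].append(float(elem.findtext("value")))

for name, values in sorted(scores.items()):
    print(name, sum(values) / len(values))
```
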