sergioburdisso / pyss3

A Python package implementing a new interpretable machine learning model for text classification (with visualization tools for Explainable AI :octocat:)
https://pyss3.readthedocs.io
MIT License

Multilabel Live Test #9

Closed: oterrier closed this issue 4 years ago

oterrier commented 4 years ago

Hey @sergioburdisso,

I've noticed that you recently fixed the multilabel fit issue #6, but Live_Test.run(clf, X_test, y_test) still does not accept y_test as a List[List[str]]. It would be really great to have this. If you don't have time, maybe I could submit a PR?
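Something along these lines is what I'd like to work (just a rough sketch; the classifier and labels here are hypothetical placeholders):

from pyss3 import SS3
from pyss3.server import Live_Test

clf = SS3()  # assume clf has already been trained on multi-label data

x_test = ["some document", "another document"]
y_test = [["labelA"], ["labelA", "labelB"]]  # one *list* of labels per document

Live_Test.run(clf, x_test, y_test)

Olivier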

sergioburdisso commented 4 years ago

Hi @oterrier!

Thanks for creating this issue. Yes, Issue #6 added the possibility to load multi-label datasets from disk using two common file structures. I'm currently working on Issue #5, which, once finished, will enable PySS3 to provide full support for multi-label classification. I've just finished working on Evaluation.test(), which now supports multilabel classification (0a897dd) :blush:
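For instance, a call roughly like the following should now be possible (just a sketch with made-up labels; it assumes clf is an SS3 classifier already trained on multi-label data):

from pyss3 import SS3
from pyss3.util import Evaluation

clf = SS3()  # assume clf was trained beforehand on multi-label data

x_test = ["this is document 1", "this is document 2", "this is document 3"]
y_test = [["labelA"], ["labelB"], ["labelA", "labelB"]]  # multiple labels per document

Evaluation.test(clf, x_test, y_test)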

Yes, it would be great to have the feature you're suggesting. Besides, I think that Live_Test.run() should also work with no y_test at all; sometimes you just want to test some documents manually, for instance:

from pyss3.server import Live_Test

docs = ["this is document 1", "this is document 2", "this is document 3"]
Live_Test.run(clf, docs)  # no y_test: just explore the documents manually

It would also be a nice feature to have, what do you think?

In the case of multi-label classification, the only thing that I'm not entirely sure about yet is how to show the list of documents in the Live Test. So far, since each document belongs to only one category, documents are grouped by category in the left panel, as shown in the following image:

Also, the % of hits (recall) and the "misclassified" icon (!) should be removed or replaced by something else when loading documents with multiple labels.

I think one possible solution is the following, divided into steps from easiest to most complex:

Step 1

When loading multilabel documents, remove everything from the left panel and only show the plain list of all documents (no categories, no % of hits, no misclassification icon). Once a document is selected and classified, show its list of true category labels along with its name, in the location marked below:

There, we could even paint correctly predicted labels in green and misclassified ones in red.
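Internally, deciding which labels to paint in which color could boil down to simple set operations, roughly like this (just a sketch with made-up labels, not the actual Live Test code):

true_labels = set(["labelA", "labelB"])  # true labels of the selected document
pred_labels = set(["labelA", "labelC"])  # labels predicted by the classifier

green = true_labels & pred_labels  # correctly predicted labels -> painted green
red = true_labels - pred_labels    # true labels the classifier missed -> painted red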

Step 2

For each document, add the % of the total labels correctly classified. For instance, if we have the following case:

x_test = ["this is document 1", "this is document 2", "this is document 3"]
y_test = [["labelA"],
          ["labelB"],
          ["labelA", "labelB"]]

And the predicted labels are:

y_pred = [["labelB"],  # misclassified
          ["labelB"],  # hit!
          ["labelA"]]  # 50% hit!

In the Live Test we could show the list with these 3 documents along with the % of label hits (recall), as follows:

[Live Test left Panel]
doc1  (0%)
doc2  (100%)
doc3  (50%)
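For reference, the % next to each document could be computed roughly as follows (a sketch of the idea, not the actual implementation):

y_test = [["labelA"], ["labelB"], ["labelA", "labelB"]]
y_pred = [["labelB"], ["labelB"], ["labelA"]]

for i, (true, pred) in enumerate(zip(y_test, y_pred), start=1):
    hits = len(set(true) & set(pred))     # true labels correctly predicted
    recall = 100 * hits // len(true)      # % of label hits for this document
    print("doc%d  (%d%%)" % (i, recall))  # doc1 (0%), doc2 (100%), doc3 (50%)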

Step 3

When the user moves the cursor over the %, the actual list of true labels is shown along with the predicted ones. Maybe we could use a Tooltip to accomplish this.

Step 4

Add a filter option at the top of the panel, allowing the user to filter the list of documents by category label or even by other criteria, like the % of hits, for instance.
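Conceptually, filtering by a category label would be as simple as something like this (hypothetical names, just to illustrate the idea):

selected_label = "labelA"  # label chosen by the user in the filter box
# keep only the documents whose true labels include the selected label
filtered_docs = [doc for doc, labels in zip(x_test, y_test) if selected_label in labels]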


What do you think about this? It is not necessary to implement all these steps; step 1 alone will be enough to enable the user to load multilabel documents. Further steps only improve user experience.

Once I finish working on Issue #5 I'll start working on this one. However, feel free to submit any PR, since any kind of help would give me a "head start" and I'd greatly appreciate it :sunglasses:

sergioburdisso commented 4 years ago

Hi again @oterrier! I've just finished implementing multi-label support for the Live Test tool (up to and including step 2). As described above, the left panel now shows the plain list of documents with no categories, and when a document is selected and classified, its true labels are shown along with the predicted ones, as shown below:

Step 2 was also implemented, so each test document in the list is shown with a % corresponding to its label-based accuracy (aka hamming score; a quick sketch of this score is included below). In addition, misclassified labels are shown in red, like "drama" below:
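Roughly, the per-document hamming score is the number of correctly predicted labels divided by the total number of labels involved (true or predicted); for instance (hypothetical genres):

def hamming_score(true_labels, pred_labels):
    # label-based accuracy for a single document:
    # correctly predicted labels over the union of true and predicted labels
    true_labels, pred_labels = set(true_labels), set(pred_labels)
    return 100 * len(true_labels & pred_labels) / len(true_labels | pred_labels)

hamming_score(["action", "drama"], ["action"])  # -> 50.0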


Let me know, Olivier, if this is what you needed/wanted, or if you find something that needs to be improved/changed/fixed. I'm performing the final checks before releasing the new version, which will include all the changes and improvements regarding multi-label classification :smile:

oterrier commented 4 years ago

Wow! It looks good, I can't wait to play with it! I'll pull the development version and let you know soon.

Thx, Olivier