Open wjlgatech opened 2 years ago
WIT should be able to handle all those input types in a single datapoint just fine. Your code might look like some combination of:
- https://colab.sandbox.google.com/github/PAIR-code/what-if-tool/blob/master/WIT_Smile_Detector.ipynb (images)
- https://colab.sandbox.google.com/github/pair-code/what-if-tool/blob/master/WIT_Toxicity_Text_Model_Comparison.ipynb (text)
- https://colab.sandbox.google.com/github/pair-code/what-if-tool/blob/master/WIT_COMPAS_with_SHAP.ipynb (continuous and categorical)
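Combining those notebooks, a single datapoint can carry all four modalities at once. Here is a minimal plain-Python sketch of that idea; the helper names (`make_datapoint`, the dummy `custom_predict_fn`) and feature keys are illustrative assumptions, not the WIT API itself. In a real notebook the datapoints would typically be `tf.train.Example` protos handed to `WitConfigBuilder`.

```python
import base64

def make_datapoint(age, country, review_text, image_bytes):
    """Bundle continuous, categorical, text, and image features
    into one datapoint (keys are illustrative)."""
    return {
        "age": float(age),                                       # continuous
        "country": country,                                      # categorical
        "review": review_text,                                   # free text
        "image/encoded": base64.b64encode(image_bytes).decode(), # image bytes
    }

def custom_predict_fn(datapoints):
    """Stand-in predict function: WIT expects one list of class
    scores per datapoint. A real one would run the multimodal
    TF model; here we return a fixed 50/50 split."""
    return [[0.5, 0.5] for _ in datapoints]

examples = [
    make_datapoint(34, "US", "great product", b"\x89PNG..."),
    make_datapoint(52, "DE", "nicht gut", b"\x89PNG..."),
]
preds = custom_predict_fn(examples)
```

In a notebook you would then follow the usual WIT pattern of building a config from the examples and registering the predict function (e.g. via `WitConfigBuilder(...).set_custom_predict_fn(...)`), as the linked Colabs demonstrate for each individual modality.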
Another option would be to look into LIT (https://pair-code.github.io/lit/), which is more full-featured, has better multimodal support, and is under active development.
@jameswex Amazing! Thanks for your specific, point-by-point feedback. I will check out those resources and report back to the WIT community on how it turns out.
Can the What-If Tool handle multimodal structured data that has continuous, categorical, text, and image columns?
Could you point me to some examples of how to set up WIT for such a multimodal TF model?
Thanks!