-
#### Description
This issue popped up during poster discussions at ISMIR 2017 (tagging @vrangasayee): librosa's support for non-Western (e.g., Indian) scales is currently non-existent, but it would be e…
-
# Full Project
LIT is a platform for visualizing and understanding AI/ML models. The complexity of LIT's interface and of the models it supports can impose barriers to adoption.
…
-
Thank you very much for your reply and help. How can I visualize the salience maps? I noticed that the human attention in the program is a 512-dimensional feature. Can you give me the original features pro…
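A minimal sketch of one way to visualize such a map, assuming the 512-dimensional attention vector has a 2D spatial layout (the 16 × 32 grid below is an assumption; the true layout depends on how the features were extracted). It upsamples the grid to the image size and alpha-blends it into the red channel using only NumPy:

```python
import numpy as np

# Hypothetical stand-ins: `image` is an H x W x 3 RGB array in [0, 1],
# `salience` is the 512-dimensional attention vector from the model.
rng = np.random.default_rng(0)
image = rng.random((224, 224, 3))
salience = rng.random(512)

# Reshape the flat vector into a 16 x 32 grid (an assumption, see above),
# then normalize it to [0, 1].
grid = salience.reshape(16, 32)
grid = (grid - grid.min()) / (grid.max() - grid.min())

# Nearest-neighbor upsample to the image size: 16*14 = 224, 32*7 = 224.
heat = np.kron(grid, np.ones((224 // 16, 224 // 32)))

# Alpha-blend the heatmap into the red channel for a simple overlay.
alpha = 0.5
overlay = image.copy()
overlay[..., 0] = (1 - alpha) * image[..., 0] + alpha * heat
```

The resulting `overlay` array can then be displayed or saved with any imaging library (e.g. `matplotlib.pyplot.imshow(overlay)`).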
-
I have to hunt for too many clues to tell that my workspace is stopped. The explicit status looks just like all the other text, so I'm otherwise left to infer it from the presence of the "Start" but…
-
Thank you for sharing. I am interested in this work. I use YOLOv3 (https://github.com/ultralytics/yolov3/) as the object detector.
I can generate the salience map when the image only ha…
-
### Proposal Summary
Add a way to view salience maps as an overlay on, or next to, the images that were selected for inspection.
### Motivation
Usually it's not as obvious as "dogs being tagged as cats" -…
-
Add support for identifying a subset of the input tokens that is most influential on the sentiment prediction (a salience map, rationale or explanation).
- [x] Check bertology survey for list of me…
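One common method from that family is gradient × input salience. A minimal sketch for a toy linear sentiment model (all names here are illustrative; a real implementation would take gradients through the actual model with an autograd framework):

```python
import numpy as np

# Toy setup: token embeddings and a linear sentiment head.
rng = np.random.default_rng(1)
vocab = ["the", "movie", "was", "great", "boring"]
embeddings = rng.normal(size=(len(vocab), 8))  # one 8-d embedding per token
w = rng.normal(size=8)                         # linear sentiment weights

def token_salience(token_ids):
    """Gradient-x-input salience per token.

    For a linear score sum_i (w . e_i), the gradient w.r.t. each embedding
    e_i is w, so gradient-x-input reduces to the dot product w . e_i,
    summed over the embedding dimension."""
    return np.array([embeddings[t] @ w for t in token_ids])

ids = [0, 1, 2, 3]  # "the movie was great"
sal = token_salience(ids)
# Tokens with the largest absolute salience are the most influential.
top = int(np.argmax(np.abs(sal)))
```

For a nonlinear model, the same per-token scores would come from multiplying the gradient of the prediction with the input embeddings and summing over the embedding dimension.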
-
Hi!
I would like to save and export the salience maps for all of the sentences at once, rather than one click at a time. Is there a convenient way to do so? Thx.
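Outside the UI, one generic way to batch-export is a short script. The sketch below is entirely hypothetical: `get_salience` stands in for whatever produces per-token scores for one sentence (a single "click" in the UI), and the output is plain JSON:

```python
import json

def get_salience(sentence):
    """Placeholder for the real per-sentence salience call.

    Here it just returns uniform scores, one per whitespace token."""
    tokens = sentence.split()
    return tokens, [1.0 / len(tokens)] * len(tokens)

sentences = ["the movie was great", "the movie was boring"]
export = []
for s in sentences:
    tokens, scores = get_salience(s)
    export.append({"sentence": s, "tokens": tokens, "salience": scores})

# One file with all sentences, instead of one click each.
with open("salience_export.json", "w") as f:
    json.dump(export, f, indent=2)
```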
-
I am strongly in favor of introducing `:Affordance` in the core vocabulary.
As a rationale, the word _affordance_ is already used in the definitions of `:Artifact`, `:Signifier`, and `:exposesSignif…
-
## Summary
Hi everyone,
I'm trying to run an IBMA meta-analysis and, when I run my script, I observe that the statistical map doesn't align with the MNI template in the upper and lower regions.
→ *…