[PDF] EQ-CBM: A Probabilistic Concept Bottleneck with Energy-based Models and Quantized Vectors
S Kim, D Ahn, BC Ko, I Jang, KJ Kim - arXiv preprint arXiv:2409.14630, 2024
The demand for reliable AI systems has intensified the need for interpretable deep
neural networks. Concept bottleneck models (CBMs) have gained attention as an
effective approach by leveraging human-understandable concepts to enhance …
[PDF] Interpreting Deep Neural Network-Based Receiver Under Varying Signal-To-Noise Ratios
M Tuononen, D Korpi, V Hautamäki - arXiv preprint arXiv:2409.16768, 2024
We propose a novel method for interpreting neural networks, focusing on a
convolutional neural network-based receiver model. The method identifies which unit
or units of the model contain the most (or least) information about the channel parameter …
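A hypothetical illustration of the stated goal, not the paper's method: one simple way to locate units informative about a channel parameter such as the SNR is a per-unit linear probe. The activations matrix, the ridge probe, and the pooling are all assumptions here.

# Hypothetical sketch (not the authors' method): score each unit of a
# trained CNN receiver by how well a linear probe on its pooled activations
# predicts a channel parameter such as the SNR.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

def rank_units_by_information(activations, snr_db):
    # activations: (n_samples, n_units) pooled unit outputs (assumed given)
    # snr_db: (n_samples,) channel parameter per input
    scores = []
    for u in range(activations.shape[1]):
        x = activations[:, u:u + 1]
        # cross-validated R^2 of a ridge probe as a crude information proxy
        scores.append(cross_val_score(Ridge(alpha=1.0), x, snr_db, cv=5).mean())
    return np.argsort(scores)[::-1]  # most informative units first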
[PDF] RI-MAE: Rotation-Invariant Masked AutoEncoders for Self-Supervised Point Cloud Representation Learning
K Su, Q Wu, P Cai, X Zhu, X Lu, Z Wang, K Hu - arXiv preprint arXiv:2409.00353, 2024
Masked point modeling methods have recently achieved great success in self-
supervised learning for point cloud data. However, these methods are sensitive to
rotations and often exhibit sharp performance drops when encountering rotational …
[PDF] The role of data embedding in quantum autoencoders for improved anomaly detection
JY Araz, M Spannowsky - arXiv preprint arXiv:2409.04519, 2024
The performance of Quantum Autoencoders (QAEs) in anomaly detection tasks is
critically dependent on the choice of data embedding and ansatz design. This study
explores the effects of three data embedding techniques: data re-uploading, parallel …
[PDF] Learning Interpretable Reward Models via Unsupervised Feature Selection
D Baimukashev, G Alcan, V Kyrki, KS Luck - 8th Annual Conference on Robot Learning
In complex real-world tasks such as robotic manipulation and autonomous driving,
collecting expert demonstrations is often more straightforward than specifying
precise learning objectives and task descriptions. Learning from expert data can be …
[PDF] Supplementary material of: ViC-MAE: Self-Supervised Representation Learning from Images and Video with Contrastive Masked Autoencoders
J Hernandez, R Villegas, V Ordonez
# V: [N, T, C, H, W] minibatch (T = 1 for images)
# tau: temperature coefficient
# clambda: contrastive coefficient
for V in loader:
    # Distant sampling
    f_i, f_j = random_sampling(V)
    # Patch embeddings and position encodings
    x_i …
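The supplementary pseudocode is cut off mid-statement. A minimal PyTorch sketch of what the fragment suggests might look as follows; only the minibatch shape, tau, and clambda come from the fragment, while encoder, the frame-sampling rule, and the InfoNCE-style loss are assumptions filling the gap.

# Minimal sketch completing the truncated loop above (assumptions noted).
import torch
import torch.nn.functional as F

def contrastive_step(V, encoder, tau=0.1, clambda=1.0):
    # V: [N, T, C, H, W] minibatch (T = 1 for images)
    t = torch.randint(0, V.shape[1], (2,))      # distant sampling (assumed rule)
    f_i, f_j = V[:, t[0]], V[:, t[1]]           # two frames, [N, C, H, W] each
    z_i = F.normalize(encoder(f_i), dim=-1)     # pooled embeddings, [N, D]
    z_j = F.normalize(encoder(f_j), dim=-1)
    logits = z_i @ z_j.t() / tau                # pairwise similarity matrix
    targets = torch.arange(V.shape[0], device=V.device)  # positives on diagonal
    return clambda * F.cross_entropy(logits, targets)    # InfoNCE-style loss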
Semantic-DARTS: Elevating Semantic Learning for Mobile Differentiable Architecture Search
B Guo, S He, M Shi, K Yu, J Chen, X Shen - IEEE Internet of Things Journal, 2024
Differentiable ARchitecture Search (DARTS) is a prevailing direction in automatic
machine learning, but it may suffer from performance collapse and generalization
issues. Recent efforts mitigate them by integrating regularization into architectural …
[PDF] Interpretable by Design: Wrapper Boxes Combine Neural Performance with Faithful Attribution of Model Decisions to Training Data
Y Su, JJ Li, M Lease - The 7th BlackboxNLP Workshop-ARR Submissions
Can we preserve the accuracy of neural models while also providing faithful
explanations of model decisions with respect to training data? We propose “wrapper
boxes”: training a neural model as usual and then using its learned feature …
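A hedged sketch of the wrapper-box idea as the abstract describes it: keep the neural model's learned features but make the final decision with a classic model whose predictions point back to training examples. The feature_extractor (e.g., a penultimate layer) and the choice of kNN are assumptions, not details from the paper.

import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def fit_wrapper_box(feature_extractor, X_train, y_train, k=5):
    F_train = feature_extractor(X_train)        # (n, d) learned features
    return KNeighborsClassifier(n_neighbors=k).fit(F_train, y_train)

def predict_with_attribution(knn, feature_extractor, x):
    f = feature_extractor(x[None])              # features for a single input
    dist, idx = knn.kneighbors(f)               # k nearest training examples
    return knn.predict(f)[0], idx[0], dist[0]   # prediction + attribution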
[PDF] GRACE: Graph-Based Contextual Debiasing for Fair Visual Question Answering (Supplementary Materials)
Y Zhang, M Jiang, Q Zhao
In our main paper, we have presented GRACE, a novel approach addressing biases
in knowledge-based VQA. It surpasses current debiasing methods, which primarily
tackle dataset biases but fall short in handling biases within the in-context learning …
[PDF] DEVIAS: Learning Disentangled Video Representations of Action and Scene
K Bae, G Ahn, Y Kim, J Choi
Video recognition models often learn scene-biased action representations due to the
spurious correlation between actions and scenes in the training data. Such models
show poor performance when the test data consists of videos with unseen action …
This message was sent by Google Scholar because you're following new articles related to research by Julia E Vogt.