amir9979 / reading_list

my simple reading list

Fwd: Matthew B. A. McDermott - new related research #49

Open fire-bot opened 6 months ago

fire-bot commented 6 months ago

Sent by @amir9979 (amir9979@gmail.com). Created by fire.


---------- Forwarded message ---------
From: Google Scholar Alerts [scholaralerts-noreply@google.com](mailto:scholaralerts-noreply@google.com)
Date: Sun, Mar 10, 2024 at 12:29 AM
Subject: Matthew B. A. McDermott - new related research
To: [amir9979@gmail.com](mailto:amir9979@gmail.com)


[PDF] Semantics-enhanced Cross-modal Masked Image Modeling for Vision-Language Pre-training

H Liu, Y Shi, H Xu, C Yuan, Q Ye, C Li, M Yan, J Zhang… - arXiv preprint arXiv …, 2024

In vision-language pre-training (VLP), masked image modeling (MIM) has recently
been introduced for fine-grained cross-modal alignment. However, in most existing
methods, the reconstruction targets for MIM lack high-level semantics, and text is not …
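
For orientation, masked image modeling itself is easy to sketch: randomly mask a fraction of image patches and train the model to reconstruct them, with the loss taken only over masked positions. The snippet below is a generic, assumed PyTorch illustration of that baseline, not the semantics-enhanced variant the paper proposes.

```python
# Generic masked image modeling (MIM) sketch -- illustrative only, not the
# paper's method. Random patches are replaced by a learned mask token and a
# tiny encoder/decoder reconstructs their raw pixels.
import torch
import torch.nn as nn

class TinyMIM(nn.Module):
    def __init__(self, patch_dim=16 * 16 * 3, embed_dim=128, mask_ratio=0.4):
        super().__init__()
        self.mask_ratio = mask_ratio
        self.mask_token = nn.Parameter(torch.zeros(patch_dim))
        self.encoder = nn.Sequential(nn.Linear(patch_dim, embed_dim), nn.GELU())
        self.decoder = nn.Linear(embed_dim, patch_dim)  # low-level pixel targets

    def forward(self, patches):                      # patches: (B, N, patch_dim)
        mask = torch.rand(patches.shape[:2], device=patches.device) < self.mask_ratio
        corrupted = torch.where(mask.unsqueeze(-1), self.mask_token, patches)
        recon = self.decoder(self.encoder(corrupted))
        return ((recon - patches) ** 2)[mask].mean()  # loss on masked patches only

model = TinyMIM()
loss = model(torch.randn(2, 196, 16 * 16 * 3))       # 14x14 patches of a 224x224 image
loss.backward()
```

The abstract's complaint is precisely that raw-pixel targets like these carry little high-level semantics, which presumably motivates the semantics-enhanced targets in the title.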


[HTML] DP-CRE: Continual Relation Extraction via Decoupled Contrastive Learning and Memory Structure Preservation

M Huang, M Xiao, L Wang, Y Du - arXiv preprint arXiv:2403.02718, 2024

Continuous Relation Extraction (CRE) aims to incrementally learn relation
knowledge from a non-stationary stream of data. Since the introduction of new
relational tasks can overshadow previously learned information, catastrophic …


[PDF] TOO-BERT: A Trajectory Order Objective BERT for self-supervised representation learning of temporal healthcare data

A Amirahmadi, F Etminani, J Bjork, O Melander… - 2024

Healthcare data accumulation over time, particularly in Electronic Health Records
(EHRs), plays a pivotal role by offering a vast repository of patient data with the
potential to enhance patient care and predict health outcomes. While BERT-inspired …


[PDF] Understanding Missingness in Time-series Electronic Health Records for Individualized Representation

GO Ghosheh, J Li, T Zhu - arXiv preprint arXiv:2402.15730, 2024

With the widespread use of machine learning models for healthcare applications, there is
increased interest in building applications for personalized medicine. Despite the
plethora of proposed research for personalized medicine, very few focus on …


[HTML] Bidirectional Generative Pre-training for Improving Time Series Representation Learning

Z Song, Q Lu, H Zhu, Y Li - arXiv preprint arXiv:2402.09558, 2024

Learning time-series representations for discriminative tasks has been a long-standing
challenge. Current pre-training methods are limited in either unidirectional
next-token prediction or randomly masked token prediction. We propose a novel …


[PDF] Memorize and Rank: Enabling Large Language Models for Medical Event Prediction

MD Ma, Y Xiao, A Cuturrufo, X Wang, W Wang - AAAI 2024 Spring Symposium on …, 2024

Medical event prediction produces a patient's potential diseases given their visit
history. It is personalized yet requires an in-depth understanding of domain
knowledge. Existing works integrate clinical knowledge into the prediction with …


Radiological Report Generation from Chest X-ray Images Using Pre-trained Word Embeddings

FS Alotaibi, N Kaur - Wireless Personal Communications, 2024

Deep neural networks have facilitated radiologists to a large extent by
automating the process of radiological report generation. The majority of researchers
have focused on improving the learning focus of the model using attention …


[PDF] Revisiting Knowledge Distillation for Autoregressive Language Models

Q Zhong, L Ding, L Shen, J Liu, B Du, D Tao - arXiv preprint arXiv:2402.11890, 2024

Knowledge distillation (KD) is a common approach to compress a teacher model to
reduce its inference cost and memory footprint, by training a smaller student model.
However, in the context of autoregressive language models (LMs), we empirically …
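
For reference, the standard token-level KD objective the abstract starts from is a KL term between the teacher's and student's temperature-softened next-token distributions, mixed with the usual cross-entropy loss. The sketch below is that generic recipe (assumed names and hyperparameters), not the revisited variant the paper studies.

```python
# Generic token-level knowledge distillation loss for autoregressive LMs --
# a minimal sketch of the standard recipe, not the method proposed in the paper.
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """student_logits/teacher_logits: (B, L, V); labels: (B, L) next-token ids."""
    # KL(teacher || student) on temperature-softened next-token distributions
    kl = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    # Standard next-token cross-entropy against the ground-truth labels
    ce = F.cross_entropy(student_logits.flatten(0, 1), labels.flatten())
    return alpha * kl + (1 - alpha) * ce

loss = kd_loss(torch.randn(2, 8, 100), torch.randn(2, 8, 100),
               torch.randint(0, 100, (2, 8)))
```

The temperature `T` and mixing weight `alpha` are the usual knobs in this generic form.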


[HTML] ARL2: Aligning Retrievers for Black-box Large Language Models via Self-guided Adaptive Relevance Labeling

L Zhang, Y Yu, K Wang, C Zhang - arXiv preprint arXiv:2402.13542, 2024

Retrieval-augmented generation enhances large language models (LLMs) by
incorporating relevant information from external knowledge sources. This enables
LLMs to adapt to specific domains and mitigate hallucinations in knowledge …
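
For context, the retrieval-augmented generation loop the abstract refers to reduces to: embed the query, fetch the top-k passages by similarity from the external knowledge source, and prepend them to the prompt before generation. Below is a minimal, assumed sketch with hypothetical `embed_fn` / `generate_fn` stand-ins; it is not ARL2's retriever-alignment method.

```python
# Minimal retrieval-augmented generation sketch -- illustrative only; embed_fn
# and generate_fn are hypothetical stand-ins for a real embedder and LLM.
import numpy as np

def retrieve(query_vec, passage_vecs, passages, k=3):
    # Cosine similarity between the query and every passage embedding
    q = query_vec / np.linalg.norm(query_vec)
    p = passage_vecs / np.linalg.norm(passage_vecs, axis=1, keepdims=True)
    top = np.argsort(p @ q)[::-1][:k]
    return [passages[i] for i in top]

def rag_answer(question, passages, passage_vecs, embed_fn, generate_fn, k=3):
    # Retrieved passages are simply prepended to the prompt as context
    context = retrieve(embed_fn(question), passage_vecs, passages, k)
    prompt = "Context:\n" + "\n".join(context) + f"\n\nQuestion: {question}\nAnswer:"
    return generate_fn(prompt)
```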


[HTML] RAM-EHR: Retrieval Augmentation Meets Clinical Predictions on Electronic Health Records

R Xu, W Shi, Y Yu, Y Zhuang, B Jin, MD Wang, JC Ho… - arXiv preprint arXiv …, 2024

We present RAM-EHR, a Retrieval AugMentation pipeline to improve clinical
predictions on Electronic Health Records (EHRs). RAM-EHR first collects multiple
knowledge sources, converts them into text format, and uses dense retrieval to obtain …


This message was sent by Google Scholar because you're following new articles related to research by Matthew B. A. McDermott.


github-actions[bot] commented 6 months ago

[{"title": "Semantics-enhanced Cross-modal Masked Image Modeling for Vision-Language Pre-training", "link": "https://arxiv.org/pdf/2403.00249", "details": "H Liu, Y Shi, H Xu, C Yuan, Q Ye, C Li, M Yan, J Zhang\u2026 - arXiv preprint arXiv \u2026, 2024", "abstract": "In vision-language pre-training (VLP), masked image modeling (MIM) has recently been introduced for fine-grained cross-modal alignment. However, in most existing methods, the reconstruction targets for MIM lack high-level semantics, and text is not \u2026"}, {"title": "DP-CRE: Continual Relation Extraction via Decoupled Contrastive Learning and Memory Structure Preservation", "link": "https://arxiv.org/html/2403.02718v1", "details": "M Huang, M Xiao, L Wang, Y Du - arXiv preprint arXiv:2403.02718, 2024", "abstract": "Continuous Relation Extraction (CRE) aims to incrementally learn relation knowledge from a non-stationary stream of data. Since the introduction of new relational tasks can overshadow previously learned information, catastrophic \u2026"}, {"title": "TOO-BERT: A Trajectory Order Objective BERT for self-supervised representation learning of temporal healthcare data", "link": "https://www.researchsquare.com/article/rs-3959125/latest.pdf", "details": "A Amirahmadi, F Etminani, J Bjork, O Melander\u2026 - 2024", "abstract": "Healthcare data accumulation over time, particularly in Electronic Health Records (EHRs), plays a pivotal role by offering a vast repository of patient data with the potential to enhance patient care and predict health outcomes. While Bert-inspired \u2026"}, {"title": "Understanding Missingness in Time-series Electronic Health Records for Individualized Representation", "link": "https://arxiv.org/pdf/2402.15730", "details": "GO Ghosheh, J Li, T Zhu - arXiv preprint arXiv:2402.15730, 2024", "abstract": "With the widespread of machine learning models for healthcare applications, there is increased interest in building applications for personalized medicine. Despite the plethora of proposed research for personalized medicine, very few focus on \u2026"}, {"title": "Bidirectional Generative Pre-training for Improving Time Series Representation Learning", "link": "https://arxiv.org/html/2402.09558v1", "details": "Z Song, Q Lu, H Zhu, Y Li - arXiv preprint arXiv:2402.09558, 2024", "abstract": "Learning time-series representations for discriminative tasks has been a long- standing challenge. Current pre-training methods are limited in either unidirectional next-token prediction or randomly masked token prediction. We propose a novel \u2026"}, {"title": "Memorize and Rank: Enabling Large Language Models for Medical Event Prediction", "link": "https://openreview.net/pdf%3Fid%3DIQU5NsX7Mj", "details": "MD Ma, Y Xiao, A Cuturrufo, X Wang, W Wang - AAAI 2024 Spring Symposium on \u2026, 2024", "abstract": "Medical event prediction produces patient's potential diseases given their visit history. It is personalized yet requires an in-depth understanding of domain knowledge. Existing works integrate clinical knowledge into the prediction with \u2026"}, {"title": "Radiological Report Generation from Chest X-ray Images Using Pre-trained Word Embeddings", "link": "https://link.springer.com/article/10.1007/s11277-024-10886-x", "details": "FS Alotaibi, N Kaur - Wireless Personal Communications, 2024", "abstract": "The deep neural networks have facilitated the radiologists to large extent by automating the process of radiological report generation. 
Majority of the researchers have focussed on improving the learning focus of the model using attention \u2026"}, {"title": "Revisiting Knowledge Distillation for Autoregressive Language Models", "link": "https://arxiv.org/pdf/2402.11890", "details": "Q Zhong, L Ding, L Shen, J Liu, B Du, D Tao - arXiv preprint arXiv:2402.11890, 2024", "abstract": "Knowledge distillation (KD) is a common approach to compress a teacher model to reduce its inference cost and memory footprint, by training a smaller student model. However, in the context of autoregressive language models (LMs), we empirically \u2026"}, {"title": "ARL2: Aligning Retrievers for Black-box Large Language Models via Self-guided Adaptive Relevance Labeling", "link": "https://arxiv.org/html/2402.13542v1", "details": "L Zhang, Y Yu, K Wang, C Zhang - arXiv preprint arXiv:2402.13542, 2024", "abstract": "Retrieval-augmented generation enhances large language models (LLMs) by incorporating relevant information from external knowledge sources. This enables LLMs to adapt to specific domains and mitigate hallucinations in knowledge \u2026"}, {"title": "RAM-EHR: Retrieval Augmentation Meets Clinical Predictions on Electronic Health Records", "link": "https://arxiv.org/html/2403.00815v1", "details": "R Xu, W Shi, Y Yu, Y Zhuang, B Jin, MD Wang, JC Ho\u2026 - arXiv preprint arXiv \u2026, 2024", "abstract": "We present RAM-EHR, a Retrieval AugMentation pipeline to improve clinical predictions on Electronic Health Records (EHRs). RAM-EHR first collects multiple knowledge sources, converts them into text format, and uses dense retrieval to obtain \u2026"}]