
Notes on Machine Learning and Medical Research Papers

A collection of research paper summaries on machine learning and medicine (brain-computer interfaces and vision). The machine learning papers mainly address computer vision and sequence modeling, while the medical papers focus on brain-computer interfaces and vision.

Papers are organized by topic and tag. Go to the Issues tab to browse, search, and filter the research papers.

Table of Contents


Machine Learning

Computer vision

Interpretable and Fine-Grained Visual Explanations for Convolutional Neural Networks

Stand-Alone Self-Attention in Vision Models

On the relationship between self-attention and convolutional layers

Dynamic Convolution: Attention over Convolution Kernels

Dynamic Group Convolution for Accelerating Convolutional Neural Networks

An image is worth 16x16 words: Transformers for image recognition at scale

End-to-End Video Instance Segmentation with Transformers

Deep learning-enabled medical computer vision

Bottleneck Transformers for Visual Recognition

Sequential

An Empirical Evaluation of Generic Convolutional and Recurrent Networks for Sequence Modeling

Machine translation of cortical activity to text with an encoder–decoder framework

Speech synthesis from neural decoding of spoken sentences

WaveNet: A generative model for raw audio

Conv-TasNet: Surpassing ideal time–frequency magnitude masking for speech separation

Convolutional Sequence to Sequence Learning

Sequence-to-Sequence Speech Recognition with Time-Depth Separable Convolutions

Parallel WaveNet: Fast high-fidelity speech synthesis

Tacotron: Towards End-to-End Speech Synthesis

Wave-Tacotron: Spectrogram-free end-to-end text-to-speech synthesis

Location-Relative Attention Mechanisms For Robust Long-Form Speech Synthesis

Pay Less Attention with Lightweight and Dynamic Convolutions

Learning representations from EEG with deep recurrent-convolutional neural networks

wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations

Improved Noisy Student Training for Automatic Speech Recognition

Visual to Sound: Generating Natural Sound for Videos in the Wild

SampleRNN: An unconditional end-to-end neural audio generation model

Generating Visually Aligned Sound from Videos

WaveGrad 2: Iterative Refinement for Text-to-Speech Synthesis

Natural TTS Synthesis by Conditioning WaveNet on Mel Spectrogram Predictions

Sequential: Transformer

Transformer-XL: Attentive language models beyond a fixed-length context

Compressive transformers for long-range sequence modelling

Reformer: The efficient transformer

Music transformer: Generating music with long-term structure

Conformer: Convolution-augmented Transformer for Speech Recognition

Transformer transducer: A streamable speech recognition model with transformer encoders and RNN-T loss

Rethinking Attention with Performers

Linformer: Self-Attention with Linear Complexity

Transformers are RNNs: Fast autoregressive transformers with linear attention

An image is worth 16x16 words: Transformers for image recognition at scale

Big bird: Transformers for longer sequences

Long Range Arena: A Benchmark for Efficient Transformers

Earthquake transformer—an attentive deep-learning model for simultaneous earthquake detection and phase picking

O(n) Connections are Expressive Enough: Universal Approximability of Sparse Transformers

Are Transformers universal approximators of sequence-to-sequence functions?

Fast Transformers with Clustered Attention

Transformers with convolutional context for ASR

Exploring Transformers for Large-Scale Speech Recognition

Transformers without Tears: Improving the Normalization of Self-Attention

Informer: Beyond Efficient Transformer for Long Sequence Time-Series Forecasting

Representation learning

Deep Canonical Correlation Analysis

Medical

Brain computer interface

Learning across multi-stimulus enhances target recognition methods in SSVEP-based BCIs

Deep Learning-based Classification for Brain-Computer Interfaces

Learning representations from EEG with deep recurrent-convolutional neural networks

Retinotopic and topographic analyses with gaze restriction for steady-state visual evoked potentials

Steady-state visually evoked potentials: Focus on essential paradigms and future perspectives

Filter bank canonical correlation analysis for implementing a high-speed SSVEP-based brain–computer interface

Methods of EEG Signal Features Extraction Using Linear Analysis in Frequency and Time-Frequency Domains

MI-EEGNET: A novel Convolutional Neural Network for motor imagery classification

A Radial Zoom Motion-Based Paradigm for Steady State Motion Visual Evoked Potentials

Selective attention to stimulus location modulates the steady-state visual evoked potential

Four Novel Motion Paradigms Based on Steady-state Motion Visual Evoked Potential

Highly Interactive Brain–Computer Interface Based on Flicker-Free Steady-State Motion Visual Evoked Potential

Comparison of Modern Highly Interactive Flicker-Free Steady State Motion Visual Evoked Potentials for Practical Brain–Computer Interfaces

A new dual-frequency stimulation method to increase the number of visual stimuli for multi-class SSVEP-based brain–computer interface

Electrophysiological correlates of gist perception: a steady-state visually evoked potentials study

Perception of illusory contours forms intermodulation responses of steady state visual evoked potentials as a neural signature of spatial integration

From intermodulation components to visual perception and cognition-a review

Frequency recognition based on canonical correlation analysis for SSVEP-based BCIs

Computational modeling and application of steady-state visual evoked potentials in brain-computer interfaces

Spatial Filtering in SSVEP-Based BCIs: Unified Framework and New Improvements

Spatial Filtering Based on Canonical Correlation Analysis for Classification of Evoked or Event-Related Potentials in EEG Data

SSVEP enhancement based on Canonical Correlation Analysis to improve BCI performances

Multiway Canonical Correlation Analysis of Brain Signals

Spatial smoothing of canonical correlation analysis for steady state visual evoked potential based brain computer interfaces

An amplitude-modulated visual stimulation for reducing eye fatigue in SSVEP-based brain-computer interfaces

Visual evoked potential and psychophysical contrast thresholds in glaucoma

Contrast sensitivity and visual disability in chronic simple glaucoma

Insights for mfVEPs from perimetry using large spatial frequency-doubling and near frequency-doubling stimuli in glaucoma

Multifocal frequency-doubling pattern visual evoked responses to dichoptic stimulation

Vision

A comparison of covert and overt attention as a control option in a steady-state visual evoked potential-based brain computer interface

Neural Differences between Covert and Overt Attention Studied using EEG with Simultaneous Remote Eye Tracking

Visual field testing for glaucoma – a practical guide

Walking enhances peripheral visual processing in humans

The steady-state visual evoked potential in vision research: A review

Multifocal Visual Evoked Potential (mfVEP) and Pattern-Reversal Visual Evoked Potential Changes in Patients with Visual Pathway Disorders: A Case Series

Study for Analysis of the Multifocal Visual Evoked Potential

Multifocal visual evoked potentials for quantifying optic nerve dysfunction in patients with optic disc drusen

Steady-state multifocal visual evoked potential (ssmfVEP) using dartboard stimulation as a possible tool for objective visual field assessment

A Review of Deep Learning for Screening, Diagnosis, and Detection of Glaucoma Progression

Objective visual field determination in forensic ophthalmology with an optimized 4-channel multifocal VEP perimetry system: a case report of a patient with retinitis pigmentosa

An oblique effect in parafoveal motion perception

Choice of Grating Orientation for Evaluation of Peripheral Vision

Motion Perception in the Peripheral Visual Field

Development of Grating Acuity and Contrast Sensitivity in the Central and Peripheral Visual Field of the Human Infant

Motion perception in the peripheral visual field

Speed of visual processing increases with eccentricity

Stimulus dependencies of an illusory motion: Investigations of the Motion Bridging Effect

Ehud Kaplan on Receptive fields