danielpatrickhug opened this issue 1 year ago
Postprocessing Decoder Output with Sinkhorn Algorithm and Hard Categorization
│ ├─Node Feature Decoding with Encoders and Decoders
│ │ ├─■──Position Encoding Function for Natural Language Processing ── Topic: 23
│ │ └─Node feature decoding using decoders and edge features
│ │ ├─■──Creating Encoders with Xavier Initialization and Truncated Normal Distribution for Encoding Categori ── Topic: 33
│ │ └─Node feature decoding with decoders and edge features
│ │ ├─■──Node feature decoding and encoding with decoders and edge features ── Topic: 2
│ │ └─■──Graph diff decoders ── Topic: 32
│ └─Postprocessing of decoder output in graph neural networks
│ ├─Decoder Output Postprocessing with Sinkhorn Algorithm and Cross-Entropy Loss
│ │ ├─Message Passing Net with Time-Chunked Data Processing
│ │ │ ├─■──Python Class for Message Passing Model with Selectable Algorithm ── Topic: 26
│ │ │ └─■──NetChunked message passing operation with LSTM states for time-chunked data ── Topic: 7
│ │ └─Loss calculation for time-chunked training with scalar truth data
│ │ ├─Loss calculation function for time-chunked training with scalar truth data
│ │ │ ├─■──Loss calculation for time-chunked training data ── Topic: 4
│ │ │ └─■──Logarithmic Sinkhorn Operator for Permutation Pointer Logits ── Topic: 10
│ │ └─■──Decoder postprocessing with Sinkhorn operator ── Topic: 28
│ └─Gradient Filtering for Optimizer Updates
│ ├─■──Filtering processor parameters in Haiku models ── Topic: 30
│ └─■──Filtering null gradients for untrained parameters during optimization
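Several leaves in the tree above (Topics 10 and 28) refer to a logarithmic Sinkhorn operator that turns permutation pointer logits into a near doubly-stochastic matrix before hard categorization. As a minimal numpy sketch of log-space Sinkhorn iteration (function names are illustrative, not taken from the repo):

```python
import numpy as np

def logsumexp(x, axis):
    """Numerically stable log(sum(exp(x))) along an axis."""
    m = np.max(x, axis=axis, keepdims=True)
    return m + np.log(np.sum(np.exp(x - m), axis=axis, keepdims=True))

def log_sinkhorn(logits, num_iters=50):
    """Log-space Sinkhorn: alternately normalize rows and columns so
    that exp(result) approaches a doubly-stochastic matrix."""
    x = logits.astype(np.float64)
    for _ in range(num_iters):
        x = x - logsumexp(x, axis=1)  # rows of exp(x) sum to 1
        x = x - logsumexp(x, axis=0)  # columns of exp(x) sum to 1
    return x

rng = np.random.default_rng(0)
logits = rng.normal(size=(5, 5))
P = np.exp(log_sinkhorn(logits))
hard_ptrs = np.argmax(P, axis=1)  # hard categorization: one pointer per row
```

After the final normalization the columns sum to exactly 1 and the rows converge toward 1 as iterations increase; taking the per-row argmax is the "hard categorization" step that yields discrete permutation pointers.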
PGN with Jax implementation and NeurIPS 2020 paper
├─Message-Passing Neural Network (MPNN) for Graph Convolutional Networks (GCNs)
│ ├─■──"Applying Triplet Message Passing with HK Transforms in MPNN for Graph Neural Networks" ── Topic: 20
│ └─■──Implementation of Deep Sets (Zaheer et al., NeurIPS 2017) using adjacency matrices and memory networ ── Topic: 13
└─GATv2 Graph Attention Network with adjustable sizes of multi-head attention and residual connections
  ├─■──Graph Attention Network v2 architecture with adjustable head number and output size ── Topic: 36
  └─■──Processor factory with various models and configurations ── Topic: 25
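The GATv2 leaves above (Topic 36) concern attention with an adjustable number of heads. As a hedged numpy sketch of what one GATv2-style head computes — attention logits e_ij = a · LeakyReLU(W_s h_i + W_t h_j), masked to the graph's edges and softmax-normalized per node (all names here are illustrative, not the repo's API):

```python
import numpy as np

def gatv2_attention(h, W_s, W_t, a, adj, slope=0.2):
    """Single-head GATv2-style attention coefficients.
    e_ij = a . LeakyReLU(W_s h_i + W_t h_j), softmaxed over neighbors j."""
    zs = h @ W_s.T                             # (n, d) source transforms
    zt = h @ W_t.T                             # (n, d) target transforms
    pre = zs[:, None, :] + zt[None, :, :]      # pairwise pre-activations (n, n, d)
    act = np.where(pre > 0, pre, slope * pre)  # LeakyReLU
    e = act @ a                                # attention logits (n, n)
    e = np.where(adj > 0, e, -np.inf)          # mask non-edges
    e = e - e.max(axis=1, keepdims=True)       # stabilize softmax
    w = np.exp(e)
    return w / w.sum(axis=1, keepdims=True)

n, d_in, d_out = 4, 3, 2
rng = np.random.default_rng(1)
h = rng.normal(size=(n, d_in))
W_s = rng.normal(size=(d_out, d_in))
W_t = rng.normal(size=(d_out, d_in))
a = rng.normal(size=d_out)
adj = np.eye(n) + np.diag(np.ones(n - 1), 1)   # self-loops plus chain edges
alpha = gatv2_attention(h, W_s, W_t, a, adj)
```

A multi-head variant would apply this with separate (W_s, W_t, a) per head and concatenate the results; the key GATv2 design choice is applying the LeakyReLU *before* the dot product with a, which makes the attention strictly more expressive than the original GAT scoring.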