-
[Strategies for pre-training graph neural networks](https://arxiv.org/abs/1905.12265)
```bib
@article{hu2019strategies,
title={Strategies for pre-training graph neural networks},
author={Hu, W…
-
### Metadata
- Authors: Tobias Domhan and Felix Hieber
- Organization: Amazon
- Conference: EMNLP 2017
- Link: https://goo.gl/eFj9gx
-
## CVPR2018
Institute: CUHK, Sydney
URL: https://arxiv.org/pdf/1805.04409.pdf
Author: http://danxu-research.weebly.com/ (author of four CVPR 2018 papers, seems to be on a breakout streak)
Keyword: DepthEstimation, SceneParsing
Interest: 2…
-
### Feature request
In the current version of the transformers library's Trainer class, only a single total loss is reported during the training and evaluation stages. However, in practical applica…
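A minimal sketch of one way this could be approximated today, by subclassing Trainer and logging extra loss components; the `aux_loss` output field used here is a hypothetical example, not part of the library:

```python
from transformers import Trainer

class MultiLossTrainer(Trainer):
    """Sketch: report individual loss components in addition to the total loss."""

    def compute_loss(self, model, inputs, return_outputs=False, **kwargs):
        outputs = model(**inputs)
        loss = outputs.loss  # total loss computed by the model from the labels
        # Hypothetical: if the model exposes a separate component (e.g. an
        # auxiliary loss), log it so it appears alongside the total loss.
        aux_loss = getattr(outputs, "aux_loss", None)
        if aux_loss is not None:
            self.log({"aux_loss": aux_loss.detach().item()})
        return (loss, outputs) if return_outputs else loss
```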
-
### Bug summary
I modified the 'deepmd/train/trainer.py' file and added some code for gradients:
print('Start collecting gradient information')
# compute the gradient of the loss function with respect to the input coordinates
coord_p…
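For reference, a self-contained toy sketch of the pattern being described (TF1-style graph mode, as used by the TensorFlow backend); the tensor names below are stand-ins, not the actual ones from `trainer.py`:

```python
import numpy as np
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

# Stand-ins for the trainer's tensors: input coordinates and a scalar loss.
coord = tf.placeholder(tf.float64, shape=[None, 3], name="coord")
loss = tf.reduce_sum(tf.square(coord))  # placeholder for the real model loss

# Gradient of the loss with respect to the input coordinates.
coord_grad = tf.gradients(loss, coord)[0]

with tf.Session() as sess:
    grad_val = sess.run(coord_grad, feed_dict={coord: np.random.rand(4, 3)})
    print("gradient shape:", grad_val.shape)  # -> (4, 3)
```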
-
Very interesting work!
Is it possible to apply this to a multi-task learning problem? Say, would it be faster than multi-task lasso?
-
## Weekly Notebook Entry — Week 4
### Overview
- **Week Span:** `9/9` to `9/15`
### Tasks for This Week
- [ ] Task 1: Give an overview of the models that were previously used (Multivariate Reg…
-
Exploring the concept of autonomous machines, particularly within the context of directions (navigation, decision-making, etc.), involves several technical aspects that combine elements of artificial …
-
Thanks for your awesome work! VisionLLM opens a way towards a generalist vision and language model.
However, from the single-task vs. multi-task results in the ablation study, it seems that m…
-
Description
To create synthetic data for OCR, we are trying out a font style transfer approach using deep learning.
The model will transfer a font style onto a given image of text. Now research on v…
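As a starting point, a small sketch of how a plain text image could be rendered with Pillow to serve as the content input for such a style-transfer model; the font path and text are placeholders:

```python
from PIL import Image, ImageDraw, ImageFont

# Render a plain text image that a style-transfer model could restyle.
canvas = Image.new("RGB", (512, 128), color="white")
draw = ImageDraw.Draw(canvas)
font = ImageFont.truetype("DejaVuSans.ttf", 64)  # placeholder font path
draw.text((16, 24), "synthetic OCR sample", fill="black", font=font)
canvas.save("content_text.png")
```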