Issue #14: Your code doesn't use the attention mechanism, does it?
Open · 01acd opened 11 months ago
We follow the official TensorFlow code and implement the attention module in models/attention.py.
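For context, the attention design this repo follows comes from Cheng et al., "Learned Image Compression with Discretized Gaussian Mixture Likelihoods and Attention Modules" (CVPR 2020). Below is a minimal PyTorch sketch of that paper's simplified attention module; the class and parameter names are illustrative and may not match what models/attention.py actually contains.

```python
import torch
import torch.nn as nn


class ResidualUnit(nn.Module):
    """Bottleneck residual unit used inside the attention module:
    1x1 conv (channel reduction) -> 3x3 conv -> 1x1 conv (restore), with skip."""

    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels // 2, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // 2, channels // 2, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // 2, channels, kernel_size=1),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(x + self.body(x))


class AttentionBlock(nn.Module):
    """Simplified attention module from Cheng et al. (CVPR 2020):
    a trunk branch of residual units and a mask branch ending in a
    1x1 conv + sigmoid; the mask gates the trunk output, and the
    gated result is added back to the input."""

    def __init__(self, channels: int):
        super().__init__()
        self.trunk = nn.Sequential(*[ResidualUnit(channels) for _ in range(3)])
        self.mask = nn.Sequential(
            *[ResidualUnit(channels) for _ in range(3)],
            nn.Conv2d(channels, channels, kernel_size=1),
        )

    def forward(self, x):
        return x + self.trunk(x) * torch.sigmoid(self.mask(x))


if __name__ == "__main__":
    # Quick shape check: output has the same shape as the input.
    block = AttentionBlock(128)
    y = block(torch.randn(1, 128, 32, 32))
    print(y.shape)  # torch.Size([1, 128, 32, 32])
```

Note that the paper's full attention module also contains a non-local block; the "simplified" variant sketched above drops it to reduce computational cost, which is the version the paper ultimately uses in its compression network.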
Thank you very much for your answer. I have another question: is the autoencoder used in this code a plain autoencoder (AE) or a variational autoencoder (VAE)? Looking forward to your reply.