mit-han-lab / streaming-llm

[ICLR 2024] Efficient Streaming Language Models with Attention Sinks
https://arxiv.org/abs/2309.17453
MIT License

wrong #22

Closed · QingChengLineOne closed this 12 months ago

QingChengLineOne commented 1 year ago

[Screenshot of the error message attached]

Guangxuan-Xiao commented 12 months ago

Hi, could you provide your PyTorch version?

QingChengLineOne commented 12 months ago

Oh, thank you. I have solved this problem. I changed my PyTorch version and it now runs successfully.
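
For anyone hitting the same error: the fix reported here was simply changing the PyTorch version. The thread does not say which version was installed or which one resolved it, so the snippet below is only a minimal sketch for checking the installed PyTorch and CUDA setup before reinstalling; it is not the repository's pinned requirement.

```python
# Minimal environment check before/after changing the PyTorch version.
# The specific version that fixed this issue is not stated in the thread.
import torch

print("PyTorch version:", torch.__version__)
print("CUDA available: ", torch.cuda.is_available())
if torch.cuda.is_available():
    print("CUDA build:     ", torch.version.cuda)
    print("GPU:            ", torch.cuda.get_device_name(0))
```

If the printed version looks stale, reinstalling PyTorch to match your CUDA toolkit (e.g. via `pip install --upgrade torch`) is the usual path; consult the repository's README and setup instructions for the versions the code was tested with.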


QingChengLineOne commented 12 months ago

OK, I will do it.

> Closed #22 as completed.