google-deepmind/language_modeling_is_compression
Apache License 2.0 · 101 stars · 14 forks

Issues
#18 Why chunked compression rate and how to interpret it? (MatteoMichetti01, closed 1 month ago, 11 comments)
#17 How does the arithmetic coder handle a top_k miss for Llama? (francisrmatt, closed 1 month ago, 1 comment)
#16 use_slow_lossless_compression description (francisrmatt, closed 2 months ago, 1 comment)
#15 Request for Training Steps and Transformer Configurations (vonpetersenn, closed 3 months ago, 1 comment)
#14 Request for implementation of Llama2 compression (richardChenzhihui, closed 5 months ago, 1 comment)
#13 Llama2 tokenisation and implementation (francisrmatt, closed 1 month ago, 2 comments)
#12 Question about the conversion into bytes (adnan1306, closed 6 months ago, 2 comments)
#11 Applying tokenizers before compression (hui-po-wang, closed 5 months ago, 3 comments)
#10 Questions regarding the mask_fn (hui-po-wang, closed 9 months ago, 3 comments)
#9 Egregious analysis errors in the generative methodology (MarcioPais, closed 9 months ago, 2 comments)
#8 Paper now has Llama1-7b results, will you update the code? (atiorh, closed 5 months ago, 4 comments)
#7 Is it possible to release your params.npz used in the paper? (zhu-zaifei, closed 11 months ago, 1 comment)
#6 How to reproduce with a pre-trained LLM? (zhu-zaifei, closed 11 months ago, 1 comment)
#5 Log-loss vs cross-entropy (atiorh, closed 12 months ago, 1 comment)
#4 Confirm the chunked compression rate (machao23, closed 1 year ago, 1 comment)
#3 How to compress a text with a language model? (ArlanCooper, closed 12 months ago, 2 comments)
#2 Compression fails due to zero-size array (machao23, closed 1 year ago, 1 comment)
#1 What's the hardware requirement? (machao23, closed 1 year ago, 1 comment)
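Several of the issues above (#4, #5, #18) concern how a model's log-loss relates to the reported compression rate. As a hedged illustration only (a toy calculation, not the repository's actual code): an ideal arithmetic coder driven by a model that assigns probability p to each observed symbol spends about -log2(p) bits per symbol, so the chunk compression rate is the total ideal code length divided by the raw size in bits. The function and variable names below are hypothetical.

```python
import math

def compressed_bits(probs):
    """Ideal code length in bits for a symbol sequence, where probs[i]
    is the model probability assigned to the i-th observed symbol.
    An arithmetic coder achieves this total to within a few bits."""
    return sum(-math.log2(p) for p in probs)

def chunk_compression_rate(probs, num_bytes):
    """Compressed bits divided by raw bits for a chunk of num_bytes bytes;
    values below 1.0 indicate compression."""
    return compressed_bits(probs) / (8 * num_bytes)

# Toy check: a uniform model over 256 byte values assigns p = 1/256
# to every symbol, costing exactly 8 bits per byte, so the rate is 1.0
# (no compression). A model assigning higher probability to the data
# it actually sees drives the rate below 1.0.
uniform = [1 / 256] * 16
print(chunk_compression_rate(uniform, 16))  # 1.0
```

This also answers the log-loss vs cross-entropy framing in #5: the average of -log2(p) over the sequence is the empirical cross-entropy in bits per symbol, and (up to coder overhead) it equals the compressed bits per symbol.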