Closed bsturk closed 4 months ago
Hmm, yes, the current binary-buffer detection is not very sophisticated (the correct way would be to look up magic bytes, but that's a bit too computationally expensive and would require file streaming to be feasible).
As for your issue: I believe your files aren't UTF-8 encoded, which trips up hex.nvim into thinking they're binary.
You could disable the UTF-8 check by setting the following in your config:

```lua
is_file_binary_post_read = function() return false end
```
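Outside a plugin manager, the same option can be passed to the plugin's setup call; a minimal sketch, assuming the standard `require('hex').setup` entry point:

```lua
-- Sketch: disable hex.nvim's post-read UTF-8 binary check entirely
require('hex').setup {
  is_file_binary_post_read = function() return false end,
}
```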
Hi, thanks for that!
Here's what I have now, but unfortunately it is still happening (if I comment it out I can open the file as text, but with all of the below it opens with xxd, etc.):
```lua
{
  'RaafatTurki/hex.nvim',
  config = true,
  is_file_binary_pre_read = function() return false end,
  is_file_binary_post_read = function() return false end,
},
```
I've attached a file which exhibits the issue for me. rebound.txt
Ahaa, as I suspected: it's not UTF-8.
You seem to be using lazy.nvim to manage your plugins; you should put the options in an `opts` table like so. (You only need the `post_read` hook disabled, since that's the one that does the UTF-8 check.)
```lua
{
  'RaafatTurki/hex.nvim',
  config = true,
  opts = {
    is_file_binary_post_read = function() return false end,
  },
},
```
I've tried it on your file and it works.
I edit a lot of code for vintage computers, and sometimes the files contain character codes that cause this plugin to assume they are binary.
I've tried the normal ways of overriding the behavior (setting 'nobinary', etc., in an autocommand), but the file still gets filtered. Is there an option to ignore this check for a particular filetype?
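If you want to skip the check only for particular file types rather than disabling it entirely, you can supply your own hook. A minimal sketch, assuming the hook runs with the affected buffer current; the extension whitelist and the control-character fallback here are illustrative choices, not plugin defaults:

```lua
{
  'RaafatTurki/hex.nvim',
  opts = {
    -- Note: overriding the hook replaces hex.nvim's built-in UTF-8 check
    -- entirely, so this sketch supplies its own crude fallback heuristic.
    is_file_binary_post_read = function()
      local ext = vim.fn.expand('%:e'):lower()
      -- example whitelist: never treat these extensions as binary
      local always_text = { txt = true, asm = true, bas = true }
      if always_text[ext] then
        return false
      end
      -- crude fallback: flag the buffer as binary if its first lines
      -- contain control characters other than tab and newline
      local lines = vim.api.nvim_buf_get_lines(0, 0, 10, false)
      for _, line in ipairs(lines) do
        if line:find('[\1-\8\11\12\14-\31]') then
          return true
        end
      end
      return false
    end,
  },
},
```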