This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
:angry:
Mm, it sounds good; the trouble is it's probably a fair amount of work to get done. Volunteers are welcome :)
I think we should switch to basis
> I think we should switch to basis
So we can in theory use different texture compression depending on the platform? I assume you mean this library https://github.com/BinomialLLC/basis_universal ?
Maybe I will find time to do a similar evaluation to the one I did with astc-encoder here.
yeah, I think it would let us compress once (so we’d finally have a single file), and the decoder would do the best thing for whatever platform we’re on, and it’s all open source if I remember correctly? we looked into it when it came out but didn’t have the time to switch to it. I think the benchmarks were super promising at the time
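For reference, the "compress once, transcode per platform" flow with basis_universal would look roughly like the sketch below. This assumes the current C++ transcoder API from the repo linked above; the constructor and some signatures have changed between library versions, so treat it as illustrative rather than exact:

```cpp
#include <cstdint>
#include <vector>
#include "basisu_transcoder.h"  // https://github.com/BinomialLLC/basis_universal

// Transcode image 0, mip 0 of a .basis file into a format the local GPU
// samples natively: the same input bytes become BC7 on desktop and ETC2 on
// Android, which is the "single file" idea discussed here.
std::vector<uint8_t> transcodeForThisPlatform(const std::vector<uint8_t>& basisFile) {
#if defined(__ANDROID__)
    const auto target = basist::transcoder_texture_format::cTFETC2_RGBA;
#else
    const auto target = basist::transcoder_texture_format::cTFBC7_RGBA;
#endif
    basist::basisu_transcoder_init();  // one-time global table setup
    basist::basisu_transcoder transcoder;

    const uint32_t size = static_cast<uint32_t>(basisFile.size());
    basist::basisu_image_level_info info;
    if (!transcoder.start_transcoding(basisFile.data(), size) ||
        !transcoder.get_image_level_info(basisFile.data(), size, info, 0, 0)) {
        return {};
    }

    // Both target formats use 16-byte blocks, so size the output in blocks.
    std::vector<uint8_t> out(info.m_total_blocks * 16);
    if (!transcoder.transcode_image_level(basisFile.data(), size, 0, 0,
                                          out.data(), info.m_total_blocks, target)) {
        return {};
    }
    return out;
}
```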
Would we actually have a single file? Like wouldn't we still need a lower resolution to not waste a ton of space and bandwidth on lower end devices like phones?
hm that’s true, the max texture size is lower on android…that’s unfortunate since we’d like to reduce the number of redirections and downloads, and baking time. I wonder…could we bake the higher resolution, but on android, simply ignore the largest mip when we download it?
> I wonder…could we bake the higher resolution, but on android, simply ignore the largest mip when we download it?
Not sure what you mean by that.
In terms of baking, downloads, and redirects: ideally we would have different packages for Interface to choose from, like `trashcan_8192x8192.zip` and `trashcan_1024x1024.zip` (see the sketch after the next comment). Baking time shouldn't be a problem at all, since we don't need baking to be fast, and the lower the resolution, the shorter the bake. The only problem could be file size on the server, but lower resolution textures obviously use a lot less space anyway.
Being able to download lower resolution assets would also be useful on desktop machines. If you are on a system with low video memory, you might as well download the low resolution version to save bandwidth and spare the CPU from rebaking the texture.
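As a sketch of that proposal, using the package naming from above and an entirely hypothetical helper (nothing like this exists in Interface today):

```cpp
#include <cstdint>
#include <string>

// Hypothetical: pick the baked package best suited to this device,
// keyed off the largest texture dimension the GPU supports.
std::string pickPackageUrl(const std::string& baseUrl, uint32_t maxTextureDim) {
    // Clamp to the resolutions we would actually bake (hypothetical set).
    const uint32_t dim = (maxTextureDim >= 8192) ? 8192 : 1024;
    return baseUrl + "/trashcan_" + std::to_string(dim) + "x" + std::to_string(dim) + ".zip";
}
```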
when we bake assets, we generate mip maps for them, which are used for progressive downloading and for when you run out of video memory. the largest mips are downloaded last. theoretically, the 1024x1024 mip of an 8192x8192 texture is equivalent to the same image downscaled to 1024x1024 and then baked. so, we could download the same file, and just stop early?
Oh I didn't know that we did that. That would make packaging a bit harder since we would need to add some sort of marker into the archive for Interface to know how much of the archive it needs to download.
if I remember correctly, the mips are downloaded in order, and when a client receives a mip, it knows its size, so it should be able to just stop requesting more?
tbh I feel dumb for not having thought of this before so maybe I’m missing something
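Sketching the "just stop early" idea, assuming the baked file stores mips smallest-first (as described above) and can be fetched with ranged requests. Every name here (`MipIndexEntry`, `fetchRange`, `downloadUsableMips`) is hypothetical, not existing Interface code:

```cpp
#include <cstdint>
#include <vector>

// Hypothetical index entry describing one mip level inside the baked file.
struct MipIndexEntry {
    uint32_t dim;       // width/height of this (square) mip level
    uint64_t offset;    // byte offset of the level within the file
    uint64_t byteSize;  // compressed size of the level
};

// Hypothetical transport hook: fetch [offset, offset + size) via a range request.
std::vector<uint8_t> fetchRange(uint64_t offset, uint64_t size);

// Download mips smallest-first and stop once the next level would exceed
// what this device can use (e.g. the lower max texture size on Android),
// instead of baking and hosting a second lower-resolution file.
std::vector<std::vector<uint8_t>> downloadUsableMips(
        const std::vector<MipIndexEntry>& index,  // sorted smallest mip first
        uint32_t maxTextureDim) {                 // e.g. 4096 on many phones
    std::vector<std::vector<uint8_t>> levels;
    for (const MipIndexEntry& entry : index) {
        if (entry.dim > maxTextureDim) {
            break;  // simply skip the largest mips
        }
        levels.push_back(fetchRange(entry.offset, entry.byteSize));
    }
    return levels;
}
```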
Hello! Is this still an issue?
Currently we are using nvtt on desktop and etc2comp on Android devices. For nvtt we are using an outdated patched version, and upstream has stopped maintenance. It doesn't seem like many people still use nvtt, since its packages have been outdated on Debian and Ubuntu for years and no one has stepped up to maintain them after the original maintainer stopped. etc2comp has apparently been unmaintained since around April 2017, abandoned by Google without a word, and Debian and Ubuntu don't ship packages for it either.
The problem with both of these encoders is that the stopped maintenance and lack of popularity will leave us supporting them ourselves. This means we will likely not be able to port to newer platforms like Apple M1, Windows on ARM, or Linux on RISC-V. Not to mention that packaging is a pain if we are using an outdated patched version of a library that already has a package.
I am suggesting we switch to astc-encoder, as it is extremely performant, has very high visual fidelity, seems popular, is maintained very well by ARM's software division, and has already made it into distributions like Debian Bullseye.
Personally I cannot fact-check what the oven is doing, since material baking is apparently broken on my system. I did a little comparison between astc-encoder and nvtt though. I took Alezia's zoneDayAmbient.jpg and compressed it with what I think is (or comes closest to) what the oven is using:
- Original
- astc-encoder 5x5 medium (took 0.2 seconds)
- astc-encoder 5x5 thorough (took 2.8 seconds)
- nvtt BC1 production (took 1.1 seconds)
To me, astc-encoder 5x5 medium looks slightly better than nvtt BC1 production while being a good amount faster, and astc-encoder 5x5 thorough looks by far the closest to the source image. I chose 5x5 on astc-encoder as it came closest to the bits per pixel that nvtt BC1 output (astc-encoder 5x5 still used slightly fewer bits in this case). Because of the higher visual fidelity we could also compress textures more aggressively than we did with nvtt, lowering disk space, download size, and video memory usage.
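For anyone who wants to reproduce this from code rather than the command line, here is a minimal sketch assuming the astc-encoder 2.x C API (`astcenc.h`), with error checking elided:

```cpp
#include <cstdint>
#include <vector>
#include "astcenc.h"  // https://github.com/ARM-software/astc-encoder

// Compress an 8-bit RGBA image with 5x5 blocks at the "medium" preset,
// matching the settings used in the comparison above.
std::vector<uint8_t> compressAstc5x5(uint8_t* rgba, unsigned int width, unsigned int height) {
    astcenc_config config;
    astcenc_config_init(ASTCENC_PRF_LDR, 5, 5, 1, ASTCENC_PRE_MEDIUM, 0, &config);

    astcenc_context* context = nullptr;
    astcenc_context_alloc(&config, 1 /*threads*/, &context);

    void* slices[] = { rgba };
    astcenc_image image = { width, height, 1, ASTCENC_TYPE_U8, slices };

    // Every ASTC block is 16 bytes regardless of footprint, so 5x5 blocks
    // give 128 / 25 = 5.12 bits per pixel.
    const size_t blocks = ((width + 4) / 5) * ((height + 4) / 5);
    std::vector<uint8_t> out(blocks * 16);

    astcenc_swizzle swizzle = { ASTCENC_SWZ_R, ASTCENC_SWZ_G, ASTCENC_SWZ_B, ASTCENC_SWZ_A };
    astcenc_compress_image(context, &image, &swizzle, out.data(), out.size(), 0);
    astcenc_context_free(context);
    return out;
}
```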
Rather than switching to astc-encoder directly, it might also be interesting to enable rendering of ASTC-compressed KTX textures. That way people could use astc-encoder for Vircadia manually.
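On the rendering side, where the driver exposes KHR_texture_compression_astc_ldr (common on mobile GPUs, less so on desktop), the KTX payload could be uploaded as-is. A sketch with a hypothetical helper:

```cpp
#include <cstdint>
#include <GL/gl.h>
#include <GL/glext.h>  // defines GL_COMPRESSED_RGBA_ASTC_5x5_KHR (0x93B2)

// Upload one pre-compressed ASTC 5x5 mip level directly; no client-side
// decode is needed when the driver supports the ASTC LDR extension.
void uploadAstcLevel(GLuint texture, GLint level, GLsizei width, GLsizei height,
                     const uint8_t* data, GLsizei byteSize) {
    glBindTexture(GL_TEXTURE_2D, texture);
    glCompressedTexImage2D(GL_TEXTURE_2D, level, GL_COMPRESSED_RGBA_ASTC_5x5_KHR,
                           width, height, 0 /*border*/, byteSize, data);
}
```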