WhiteMagicRaven closed this issue 1 year ago.
I'm not sure of the best way to implement this without causing issues with the current mipmap handling. Graphics isn't really my area of expertise. I'm also not sure this would make enough of a difference compared to existing r_roundimagedown 0 to be worth the trouble.
Have you tried testing this yourself, or have any screenshots showing it would make a noticeable difference on any maps? If you know how to make a working implementation that does noticeably improve graphics I'd be happy to consider a pull request.
Well, I tried to compile it and got too many errors )).
Just for quick reference
if ( SDL_GL_ExtensionSupported( "GL_ARB_texture_non_power_of_two" ) )
{
scaled_width = width;
scaled_height = height;
// this skips the ResampleTexture function, which does not resize textures perfectly;
// the video card will support non-power-of-two textures directly.
// this is how I did it before in plain q3; it was a while ago, if I recall correctly.
}
else
{
for (scaled_width = 1 ; scaled_width < width ; scaled_width<<=1)
;
for (scaled_height = 1 ; scaled_height < height ; scaled_height<<=1)
;
}
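For reference, the fallback rounding loop above can be pulled out as a standalone helper. A minimal sketch (`next_power_of_two` is an illustrative name, not from the engine):

```c
#include <assert.h>

/* Round a dimension up to the next power of two, exactly as the
   fallback loops above do when the NPOT extension is missing. */
static int next_power_of_two(int x)
{
    int p;
    for (p = 1; p < x; p <<= 1)
        ;
    return p;
}
```

So a 256x240 texture would be resampled to 256x256 on hardware without the extension, while NPOT-capable cards can take it as-is.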
So basically any texture from the game will be uploaded in an untouched state to the video card (with r_roundimagedown 0 and r_picmip 0). I can't provide screenshots, but I remember some source ports use this extension. This completely abandons the ResampleTexture function (with r_roundimagedown 0 and r_picmip 0), making textures sharper, since the video card supports them natively. In any case, ResampleTexture does the job worse than the video card, and it causes blur.
In the opengl1 renderer this seems to cause some artifacts, possibly because the mipmap generation only handles power-of-2 textures correctly. Here are some examples:
With r_roundImagesDown 1:
With r_roundImagesDown 0:
With ResampleTexture skipped (notice extra pattern on wall in background):
This does seem like it might work on the opengl2 renderer. But the difference seems really small compared to r_roundImagesDown 0, and only for textures that aren't originally powers of 2, so it is probably hardly noticeable in actual gameplay. I'm not sure it's worth the extra complexity of adding a setting for it.
What will happen if you disable mipmaps when it loads non-power-of-two textures?
I assume that the video card itself will scale the textures down. On older systems this could cause an FPS slowdown; on newer ones there should be no problems.
It doesn't seem to look good. Distant images aren't downsized correctly.
Can you give me that version for testing please? Internally only.
Here's the test branch: https://github.com/Chomenor/ioef-cmod/tree/image_scaling_test
Test build for Windows: image_scaling_test.zip
To enable the regular ResizeTexture skip:
/set test 1
/vid_restart
To also enable the mipmap skip:
/set test 2
/vid_restart
Hello, thanks)
I found some commits in q3: http://icculus.org/pipermail/quake3-commits/2012-October/002246.html (look for textureNonPowerOfTwo).
I don't remember exactly where I saw that fix, but it worked with mipmaps too. Probably the mipmap code got rewritten.
Otherwise, I noticed that if I disable mipmapping and force the nVidia control panel to apply 4x supersampling, it looks sharper than what the mipmap code in q3 can produce.
Also, please give me the name of the map you use in tests.
Found that map)
I found this in the uHexen2 source port:
GL_MipMap
static void GL_MipMap (const byte *in, byte *out, int *width, int *height, int destwidth, int destheight)
{
const byte *inrow;
int x, y, nextrow;
// if given odd width/height this discards the last row/column
// of pixels, rather than doing a proper box-filter scale down
inrow = in;
nextrow = *width * 4;
if (*width > destwidth)
{
*width >>= 1;
if (*height > destheight)
{
// reduce both
*height >>= 1;
for (y = 0; y < *height; y++, inrow += nextrow * 2)
{
for (in = inrow, x = 0; x < *width; x++)
{
out[0] = (byte) ((in[0] + in[4] + in[nextrow ] + in[nextrow+4]) >> 2);
out[1] = (byte) ((in[1] + in[5] + in[nextrow+1] + in[nextrow+5]) >> 2);
out[2] = (byte) ((in[2] + in[6] + in[nextrow+2] + in[nextrow+6]) >> 2);
out[3] = (byte) ((in[3] + in[7] + in[nextrow+3] + in[nextrow+7]) >> 2);
out += 4;
in += 8;
}
}
}
else
{
// reduce width
for (y = 0; y < *height; y++, inrow += nextrow)
{
for (in = inrow, x = 0; x < *width; x++)
{
out[0] = (byte) ((in[0] + in[4]) >> 1);
out[1] = (byte) ((in[1] + in[5]) >> 1);
out[2] = (byte) ((in[2] + in[6]) >> 1);
out[3] = (byte) ((in[3] + in[7]) >> 1);
out += 4;
in += 8;
}
}
}
}
else
{
if (*height > destheight)
{
// reduce height
*height >>= 1;
for (y = 0; y < *height; y++, inrow += nextrow * 2)
{
for (in = inrow, x = 0; x < *width; x++)
{
out[0] = (byte) ((in[0] + in[nextrow ]) >> 1);
out[1] = (byte) ((in[1] + in[nextrow+1]) >> 1);
out[2] = (byte) ((in[2] + in[nextrow+2]) >> 1);
out[3] = (byte) ((in[3] + in[nextrow+3]) >> 1);
out += 4;
in += 4;
}
}
}
}
}
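The interesting part of GL_MipMap, compared to q3's R_MipMap, is that it can reduce one axis at a time. Here is a minimal self-contained sketch of just the "reduce width" branch (`reduce_width_rgba` is an illustrative name, not from the original source):

```c
typedef unsigned char byte;

/* Halve only the width of an RGBA image by averaging horizontal
   pixel pairs, mirroring GL_MipMap's "reduce width" branch. */
static void reduce_width_rgba(const byte *in, byte *out, int width, int height)
{
    const byte *inrow = in;
    int nextrow = width * 4;  /* bytes per input row */
    int x, y, c;

    width >>= 1;
    for (y = 0; y < height; y++, inrow += nextrow)
    {
        for (in = inrow, x = 0; x < width; x++, in += 8, out += 4)
        {
            for (c = 0; c < 4; c++)
                out[c] = (byte)((in[c] + in[c + 4]) >> 1);
        }
    }
}
```

Because each axis is handled separately, a 256x240 texture can be reduced toward a 128x240 target without touching the height, which the symmetric q3 halving cannot do.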
Or another way is to let OpenGL itself generate the mipmaps: https://stackoverflow.com/questions/59992190/opengl-filtering-parameters-and-mipmap
@WhiteMagicRaven I'm not sure, but aren't you confusing things here? NPOT textures suck, nothing more. Lousy artists will create NPOT textures, and any solution to make them look good will eat up resources no matter what engine/renderer you use. But are NPOT textures really causing your problem? Or do you just want sharper textures? If that is what you want, why not change the filtering method? Both renderers currently support anisotropic filtering, afaik. For example:
r_textureMode GL_LINEAR_MIPMAP_LINEAR
r_ext_texture_filter_anisotropic 1
r_ext_max_anisotropy 16
Please report if this works for you and solves your issue, and apologies if I totally misunderstood your problem ...
Probably interesting for you: https://gamedev.stackexchange.com/questions/7927/should-i-use-textures-not-sized-to-a-power-of-2
All of that was done before; yes, I'm aware of all that. NPOT textures sometimes get used, and quake1 and quake2 source ports utilize them perfectly when they find the GL_ARB_texture_non_power_of_two extension.
As far as I know, id Software's ResampleTexture and MipMap generators are not the best by modern standards; these functions are outdated and produce too much blur.
Can you then please provide screenshots, so we can investigate why your textures aren't sharp? With the mentioned settings (and picmip 0) you really should have very sharp textures. I really can't think of any other problem, especially because you said that the nVidia control panel works as expected (the drivers are okay) ... And can you please provide the NPOT texture you are concerned about? I don't even know that there are any in idtech3 games.
opengl2 renderer does seem to use an opengl generator function. That's probably why it is able to handle the non power of 2 textures correctly. https://github.com/Chomenor/ioef-cmod/blob/ed5795460661a56edc9d9d886b901fbdbb369c44/code/renderergl2/tr_image.c#L2017
Supporting non power of 2 textures does seem like it can make a slight improvement in sharpness, as you can tell from the screenshots if you compare the second and third ones closely. But that's a cherry picked scene which isn't typical of most maps and textures on EF, and it still seems like a very small difference that you could hardly notice in an actual game with somebody shooting at you.
Are the textures from the screenshot map non-power-of-2 textures? Just out of interest, what size are they? I'm currently working on renderer2... Just to be sure, we are still talking about non-power-of-2 textures, right? Even textures of e.g. 128x256, or even 64x512, ARE power of 2 textures, regardless of whether they are square or not. A texture with dimensions of e.g. 111x398 is NPOT. https://www.katsbits.com/tutorials/textures/make-better-textures-correct-size-and-power-of-two.php Hence I ask which texture in EF is NPOT.
It looks like the main texture (the one on the walkway) is textures/danger/gehweg2.jpg from dangercity.pk3, and its size is 256x240.
Thanks for reporting! I'll check that dimension, with both renderers! Though, I'm also pretty sure it's not very likely we can improve much here...
So thanks for the instructions on how to compile, and I must say: for opengl2, adding non-power-of-two texture support works perfectly. Some pseudo code:
//
// convert to exact power of 2 sizes
//
if (!mipmap)
{
scaled_width = width;
scaled_height = height;
}
else
{
if ( SDL_GL_ExtensionSupported( "GL_ARB_texture_non_power_of_two" ) )
{
scaled_width = width;
scaled_height = height;
//ri.Printf( PRINT_ALL, "...using GL_ARB_texture_non_power_of_two\n" );
}
else
{
scaled_width = NextPowerOfTwo(width);
scaled_height = NextPowerOfTwo(height);
//ri.Printf( PRINT_ALL, "...GL_ARB_texture_non_power_of_two not found\n" );
}
}
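The same decision can be sketched as a small self-contained function. This is a sketch under stated assumptions: `choose_upload_size` and `npot_supported` are illustrative names, with the flag standing in for the real SDL_GL_ExtensionSupported() check:

```c
/* Decide the upload dimensions for a texture: with NPOT support pass
   the dimensions through untouched, otherwise round each one up to
   the next power of two (the r_roundimagedown 0 behavior). */
static void choose_upload_size(int width, int height, int npot_supported,
                               int *scaled_width, int *scaled_height)
{
    if (npot_supported) {
        *scaled_width = width;    /* e.g. 256x240 uploads as-is */
        *scaled_height = height;
    } else {
        int w, h;
        for (w = 1; w < width; w <<= 1)
            ;
        for (h = 1; h < height; h <<= 1)
            ;
        *scaled_width = w;        /* e.g. 256x240 becomes 256x256 */
        *scaled_height = h;
    }
}
```

With the extension present, ResampleTexture never runs for a 0-picmip NPOT image, which is the whole point of the change.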
For opengl1 it is another case: R_MipMap and R_MipMap2 were not designed with NPOTs in mind.
So the fix for R_MipMap is OK; for R_MipMap2 I don't know how yet (maybe to follow later), but R_MipMap2 can be disabled when the NPOT GL extension is found.
static void R_MipMap (byte *in, int width, int height)
{
int i, j;
byte *out, *inrow;
int row;
if ( !r_simpleMipMaps->integer ) {
R_MipMap2( (unsigned *)in, width, height );
return;
}
if ( width == 1 && height == 1 ) {
return;
}
inrow = in;
row = width * 4;
out = in;
width >>= 1;
height >>= 1;
if ( width == 0 || height == 0 ) {
width += height; // get largest
for (i=0 ; i<width ; i++, out+=4, in+=8 ) {
out[0] = ( in[0] + in[4] )>>1;
out[1] = ( in[1] + in[5] )>>1;
out[2] = ( in[2] + in[6] )>>1;
out[3] = ( in[3] + in[7] )>>1;
}
return;
}
for (i=0 ; i<height ; i++, inrow +=row* 2) {
for (in = inrow, j=0 ; j<width ; j++, out+=4, in+=8) {
out[0] = (in[0] + in[4] + in[row+0] + in[row+4])>>2;
out[1] = (in[1] + in[5] + in[row+1] + in[row+5])>>2;
out[2] = (in[2] + in[6] + in[row+2] + in[row+6])>>2;
out[3] = (in[3] + in[7] + in[row+3] + in[row+7])>>2;
}
}
}
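For comparison with GL_MipMap above, here is a self-contained sketch of R_MipMap's main 2x2 box-filter loop, stripped of the R_MipMap2 / r_simpleMipMaps handling and the 1-pixel edge cases (`halve_rgba` is an illustrative name):

```c
typedef unsigned char byte;

/* In-place 2x downscale of an even-sized RGBA image by averaging
   each 2x2 block of pixels, following R_MipMap's main loop. */
static void halve_rgba(byte *in, int width, int height)
{
    int i, j;
    int row = width * 4;  /* bytes per input row */
    byte *out = in;
    const byte *inrow = in;

    width >>= 1;
    height >>= 1;
    for (i = 0; i < height; i++, inrow += row * 2)
    {
        for (in = (byte *)inrow, j = 0; j < width; j++, out += 4, in += 8)
        {
            out[0] = (in[0] + in[4] + in[row + 0] + in[row + 4]) >> 2;
            out[1] = (in[1] + in[5] + in[row + 1] + in[row + 5]) >> 2;
            out[2] = (in[2] + in[6] + in[row + 2] + in[row + 6]) >> 2;
            out[3] = (in[3] + in[7] + in[row + 3] + in[row + 7]) >> 2;
        }
    }
}
```

Because the loop always halves both axes, an odd dimension leaves the last row/column unconsumed, which is exactly why this path misbehaves on NPOT inputs.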
In conclusion, that works perfectly for opengl2; add a cvar like r_ext_texture_non_power_of_two (0 to disable, 1 to use it). R_MipMapsRGB handles NPOTs.
Well done!
Thanks! I haven't tested it very thoroughly, but it does seem to work now.
I added a basic implementation in a3b4c2e7c7d658c48c6b489ade39f429ae0117bf, which can be enabled by setting "r_ext_texture_non_power_of_two" cvar to 1. Let me know if you find any issues or improvements.
This can be achieved if the engine uses the OpenGL extension GL_ARB_texture_non_power_of_two or similar (in case it's not ARB).
This extension is a sign that the video card can load textures of any size. In the case of this game, or any q3 engine, it bypasses the ResampleTexture function, which causes blur on textures by scaling them.