EvilSocket1 closed this issue 10 years ago.
@Kiwuser you are posting the "header" of the email you are replying to, and this reveals your email address; perhaps this is not what you intended! You may want to visit the GitHub page and edit your post.
I honestly don't care; I barely use this email anymore, and since my school blocks GitHub, this is my only way to send messages. I actually don't know what it looks like. Can you possibly take a screenshot so I can see what you mean?
It's fine. But that is an interesting theory; is there any way to prove it?
My phone won't let me fix it. Wait an hour until I have lunch; I'll use my hotspot and fix it.
All done thanks to sneakily turning on hotspot in the middle of class :)
This is interesting.
I decided to test this out by putting in some code to test the width and height of the texture that is bound at the moment before you draw the rectangle mesh.
It's reporting that the texture is only 16 x 16 pixels, despite the fact that it should be bigger.
This is the code that I'm using to test the width and height:
GLint width[1];
GLint height[1];
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_WIDTH, width);
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_HEIGHT, height);
printf("Width: %d Height: %d \n", width[0], height[0]);
I'm reading 800 and 600
So when you have your code like this, you're getting 800 by 600?
m_defaultShader->Bind();
m_defaultShader->UpdateUniforms(g_transform, *g_material, this);
GLint width[1];
GLint height[1];
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_WIDTH, width);
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_HEIGHT, height);
printf("Width: %d Height: %d \n", width[0], height[0]);
g_mesh->Draw();
I do read 16 there, but inside the TextureData::initrender function I get 800 and 600. But even if it was reading just 16 by 16 pixels, we would see more than just one color on the screen.

Update: I set the color of the directionalLight to green and set the intensity really high. The result was the render-to-texture area being all green. This proves that the render-to-texture functionality is working, but it seems like the framebuffer has a very small resolution or something? I don't really know. Someone with more knowledge of OpenGL should look into this issue more. Look at https://github.com/BKcore/JungleIN/blob/master/trunk/src/render/framebuffer.cpp It seems very similar to what Benny has and it may solve some things. I'll look into it tomorrow when I get a chance.
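One diagnostic that might be worth adding here (my suggestion, not code from the repository): check framebuffer completeness right after the attachments are created, since an incomplete FBO would also explain corrupted or single-color output. A minimal sketch, assuming GLEW:

```cpp
#include <cstdio>
#include <GL/glew.h>

// Diagnostic sketch: call this while the FBO is still bound, right after the
// color texture and depth renderbuffer are attached. An incomplete framebuffer
// would explain corrupted or single-color render-to-texture output.
static void CheckBoundFramebuffer(const char* label)
{
    GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
    if (status != GL_FRAMEBUFFER_COMPLETE)
        std::fprintf(stderr, "%s: framebuffer incomplete, status 0x%x\n", label, status);
}
```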
Oooh... You're right, that does look like it could prove helpful. But much like you, I can't do it tonight either.
Sigh... I want to get back to working on my other project, but this problem has had me going in circles and keeps leaving me rather exhausted. And my seemingly chronic sinus headaches aren't exactly helping on that front.
Okay, I tested on an AMD 6370M and it works just fine. So it probably is something that only works on AMD.
Interesting update. I think it's sort of working now; it's just drawing something very weird. I used http://www.songho.ca/opengl/gl_fbo.html to experiment with it. Also, someone please tell me how to properly format code on GitHub; this is driving me crazy.
GLuint textureId;

RenderingEngine::RenderingEngine()
{
m_samplerMap.insert(std::pair<std::string, unsigned int>("diffuse", 0));
m_samplerMap.insert(std::pair<std::string, unsigned int>("normalMap", 1));
m_samplerMap.insert(std::pair<std::string, unsigned int>("dispMap", 2));
AddVector3f("ambient", Vector3f(0.1f, 0.1f, 0.1f));
m_defaultShader = new Shader("forward-ambient");
glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
glFrontFace(GL_CW);
glCullFace(GL_BACK);
glEnable(GL_CULL_FACE);
glEnable(GL_DEPTH_TEST);
glEnable(GL_DEPTH_CLAMP);
// create a texture object
glGenTextures(1, &textureId);
glBindTexture(GL_TEXTURE_2D, textureId);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
//glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
//glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
//glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_TRUE); // automatic mipmap generation included in OpenGL v1.4
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, TEXTURE_WIDTH, TEXTURE_HEIGHT, 0, GL_RGBA, GL_UNSIGNED_BYTE, 0);
glBindTexture(GL_TEXTURE_2D, 0);
glGenFramebuffers(1, &fboId);
glBindFramebuffer(GL_FRAMEBUFFER, fboId);
// create a renderbuffer object to store depth info
// NOTE: A depth renderable image should be attached the FBO for depth test.
// If we don't attach a depth renderable image to the FBO, then
// the rendering output will be corrupted because of missing depth test.
// If you also need stencil test for your rendering, then you must
// attach additional image to the stencil attachement point, too.
glGenRenderbuffers(1, &rboId);
glBindRenderbuffer(GL_RENDERBUFFER, rboId);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT, TEXTURE_WIDTH, TEXTURE_HEIGHT);
glBindRenderbuffer(GL_RENDERBUFFER, 0);
// attach a texture to FBO color attachement point
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, textureId, 0);
// attach a renderbuffer to depth attachment point
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, rboId);
glBindFramebuffer(GL_FRAMEBUFFER, 0);
}
RenderingEngine::~RenderingEngine()
{
if(m_defaultShader) delete m_defaultShader;
}

void draw()
{
glBindTexture(GL_TEXTURE_2D, textureId);
glColor4f(1, 1, 1, 1);
glBegin(GL_TRIANGLES);
// front faces
glNormal3f(0,0,1);
// face v0-v1-v2
glTexCoord2f(1,1); glVertex3f(1,1,1);
glTexCoord2f(0,1); glVertex3f(-1,1,1);
glTexCoord2f(0,0); glVertex3f(-1,-1,1);
// face v2-v3-v0
glTexCoord2f(0,0); glVertex3f(-1,-1,1);
glTexCoord2f(1,0); glVertex3f(1,-1,1);
glTexCoord2f(1,1); glVertex3f(1,1,1);
// right faces
glNormal3f(1,0,0);
// face v0-v3-v4
glTexCoord2f(0,1); glVertex3f(1,1,1);
glTexCoord2f(0,0); glVertex3f(1,-1,1);
glTexCoord2f(1,0); glVertex3f(1,-1,-1);
// face v4-v5-v0
glTexCoord2f(1,0); glVertex3f(1,-1,-1);
glTexCoord2f(1,1); glVertex3f(1,1,-1);
glTexCoord2f(0,1); glVertex3f(1,1,1);
// top faces
glNormal3f(0,1,0);
// face v0-v5-v6
glTexCoord2f(1,0); glVertex3f(1,1,1);
glTexCoord2f(1,1); glVertex3f(1,1,-1);
glTexCoord2f(0,1); glVertex3f(-1,1,-1);
// face v6-v1-v0
glTexCoord2f(0,1); glVertex3f(-1,1,-1);
glTexCoord2f(0,0); glVertex3f(-1,1,1);
glTexCoord2f(1,0); glVertex3f(1,1,1);
// left faces
glNormal3f(-1,0,0);
// face v1-v6-v7
glTexCoord2f(1,1); glVertex3f(-1,1,1);
glTexCoord2f(0,1); glVertex3f(-1,1,-1);
glTexCoord2f(0,0); glVertex3f(-1,-1,-1);
// face v7-v2-v1
glTexCoord2f(0,0); glVertex3f(-1,-1,-1);
glTexCoord2f(1,0); glVertex3f(-1,-1,1);
glTexCoord2f(1,1); glVertex3f(-1,1,1);
// bottom faces
glNormal3f(0,-1,0);
// face v7-v4-v3
glTexCoord2f(0,0); glVertex3f(-1,-1,-1);
glTexCoord2f(1,0); glVertex3f(1,-1,-1);
glTexCoord2f(1,1); glVertex3f(1,-1,1);
// face v3-v2-v7
glTexCoord2f(1,1); glVertex3f(1,-1,1);
glTexCoord2f(0,1); glVertex3f(-1,-1,1);
glTexCoord2f(0,0); glVertex3f(-1,-1,-1);
// back faces
glNormal3f(0,0,-1);
// face v4-v7-v6
glTexCoord2f(0,0); glVertex3f(1,-1,-1);
glTexCoord2f(1,0); glVertex3f(-1,-1,-1);
glTexCoord2f(1,1); glVertex3f(-1,1,-1);
glEnd();
glBindTexture(GL_TEXTURE_2D, 0);
}

void RenderingEngine::Render(GameObject* object)
{
glViewport(0, 0, TEXTURE_WIDTH, TEXTURE_HEIGHT);
glBindFramebuffer(GL_FRAMEBUFFER, fboId);
glClearColor(0,0,0,0);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glLoadIdentity();
object->RenderAll(m_defaultShader, this);
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE);
glDepthMask(GL_FALSE);
glDepthFunc(GL_EQUAL);
for(unsigned int i = 0; i < m_lights.size(); i++)
{
m_activeLight = m_lights[i];
object->RenderAll(m_activeLight->GetShader(), this);
}
glDepthMask(GL_TRUE);
glDepthFunc(GL_LESS);
glDisable(GL_BLEND);
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glBindTexture(GL_TEXTURE_2D, textureId);
glGenerateMipmap(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, 0);
glViewport(0, 0, screenWidth, screenHeight);
glClearColor(0, 0, 0, 0);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);
draw();
}
I asked several people who have known OpenGL for a long time, and none of them knows why this could possibly happen. I tried to debug the code, but yeah, it just seems to happen...
I feel the need to apologize: I haven't been able to work on this for the past several days. I wasn't feeling all that well and couldn't really focus on any of the work I've been doing.
So yeah... I haven't been able to solve this issue yet.
Edit: Also, um... I'm currently dealing with my pets. My dog is sad because my mom is not here, so she's lying on my lap. And one of my cats has taken roost on my desk.
This issue should probably be closed as fixed, since it was originally a compilation problem around dynamic arrays in Visual Studio. That has since been resolved in the latest commits, and the new issue is about rendering to a texture, specifically on nVidia graphics cards.
The issue is not just nVidia but also Intel, and this issue is about the grey render-to-texture result as well, as seen in the original post.
The reported texture size of 16x16 is because textures are bound in the shader's updateUniforms function.
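In other words, the earlier width/height query reads whatever texture happens to be bound at that point (the material's diffuse map), not the render target. A minimal sketch of querying the render target directly instead; fboTextureId here is just a placeholder for the FBO's color attachment handle, not a name from the engine:

```cpp
// Bind the FBO's color attachment first so the size query reads that texture,
// not the last diffuse map bound by updateUniforms. fboTextureId is a
// placeholder name for the render target's texture handle.
GLint width = 0, height = 0;
glBindTexture(GL_TEXTURE_2D, fboTextureId);
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_WIDTH, &width);
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_HEIGHT, &height);
printf("Render target: %d x %d\n", width, height);
```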
@Kiwuser it should still have been split into two issues: the first being that it wouldn't compile in Visual Studio (due to the array issue), and the second being the more prominent problem where the texture is not rendering correctly on some cards.
It could be split into 2 issues, but there's been too much conversation on this issue, so it would be pointless to start a new one.
We have to solve the issue, not argue about where it belongs. Also, people might think this issue has been solved if we closed it.
At least edit the post then, as the title is completely misleading for anyone who hasn't been involved so far.
That would also be true, so has anyone tried anything new?
Like I said: "The reported texture size of 16x16 is because textures are bound in the shader's updateUniforms function."
void TextureData::BindAsRenderTarget()
{
glBindTexture(GL_TEXTURE_2D, 0);
glBindFramebuffer(GL_FRAMEBUFFER, m_frameBuffer);
glViewport(0, 0, m_width, m_height);
}
Shouldn't that be:
void TextureData::BindAsRenderTarget()
{
this->Bind(0);
glBindFramebuffer(GL_FRAMEBUFFER, m_frameBuffer);
glViewport(0, 0, m_width, m_height);
}
I haven't been able to fix it, but I think I'm getting close. Someone else should try to fix it; I won't be able to really sit down and attempt a fix for a while. Also, we should probably delete these conversations when we finally fix it so people don't have to read all these comments.
hBdq, that's good thinking, but it is supposed to be as it is. The reason is that OpenGL doesn't allow a texture to be bound as a texture and as a render target at the same time. Binding texture 0 is supposed to ensure that the texture isn't bound.
Now that you've brought it up, however, I realize that it doesn't ensure the texture isn't bound, because it doesn't unbind it from every active texture unit. I don't think that's causing this issue, but if someone reading this is experiencing it, it might be worth trying to loop through all 32 texture units and bind nothing to them before binding the framebuffer.
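A minimal sketch of that idea, assuming a fixed count of 32 units (GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS could be queried instead); m_frameBuffer, m_width, and m_height are the same members used in BindAsRenderTarget above:

```cpp
// Unbind 2D textures from every texture unit before switching to the FBO,
// so the render target cannot still be bound as a source texture anywhere.
for (int unit = 0; unit < 32; unit++)
{
    glActiveTexture(GL_TEXTURE0 + unit);
    glBindTexture(GL_TEXTURE_2D, 0);
}
glActiveTexture(GL_TEXTURE0);
glBindFramebuffer(GL_FRAMEBUFFER, m_frameBuffer);
glViewport(0, 0, m_width, m_height);
```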
@BennyQBD As suspected, that wasn't the cause of the issue, unfortunately.
Oh, wow! Whatever Benny changed in the latest update to this repository seems to have made render to texture work for me.
I suggest you all test the latest version of this repository and see if it works for you.
By default, the project now renders the original scene at full window size. This is just using the window as a render target, not rendering to a texture. When I try to put rendering to a texture back in (at a smaller size, clearing the window to blue to see a background change), it doesn't work.
Perhaps I have the render to texture code wrong now though.
@Zammalad This is my proof that the latest repository, when left "as is", does in fact work for me.
If you mean mentioning them, type @ and it will suggest the names of the people in this issue.
@Colt-Zero You're right. It works.
@Zammalad
#include "renderingEngine.h"
#include "window.h"
#include "gameObject.h"
#include "shader.h"
#include <GL/glew.h>
#include "mesh.h"
#include <cstring>
RenderingEngine::RenderingEngine()
{
m_samplerMap.insert(std::pair<std::string, unsigned int>("diffuse", 0));
m_samplerMap.insert(std::pair<std::string, unsigned int>("normalMap", 1));
m_samplerMap.insert(std::pair<std::string, unsigned int>("dispMap", 2));
SetVector3f("ambient", Vector3f(0.2f, 0.2f, 0.2f));
m_defaultShader = new Shader("forward-ambient");
glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
glFrontFace(GL_CW);
glCullFace(GL_BACK);
glEnable(GL_CULL_FACE);
glEnable(GL_DEPTH_TEST);
glEnable(GL_DEPTH_CLAMP);
m_altCamera = new Camera(Matrix4f().InitIdentity());
m_altCameraObject = (new GameObject())->AddComponent(m_altCamera);
m_altCamera->GetTransform().Rotate(Vector3f(0,1,0),ToRadians(180.0f));
int width = Window::GetWidth();
int height = Window::GetHeight();
m_tempTarget = new Texture(width / 5, height / 5, 0, GL_TEXTURE_2D, GL_NEAREST, GL_RGBA, GL_RGBA, false, GL_COLOR_ATTACHMENT0);
m_planeMaterial = new Material(m_tempTarget, 1, 8);
m_planeTransform.SetScale(1.0f);
m_planeTransform.Rotate(Quaternion(Vector3f(1,0,0), ToRadians(90.0f)));
m_planeTransform.Rotate(Quaternion(Vector3f(0,0,1), ToRadians(180.0f)));
m_plane = new Mesh("./res/models/plane.obj");
}
RenderingEngine::~RenderingEngine()
{
if(m_defaultShader) delete m_defaultShader;
if(m_altCameraObject) delete m_altCameraObject;
if(m_planeMaterial) delete m_planeMaterial;
if(m_plane) delete m_plane;
}
void RenderingEngine::Render(GameObject* object)
{
m_tempTarget->BindAsRenderTarget();
glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
object->RenderAll(m_defaultShader, this);
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE);
glDepthMask(GL_FALSE);
glDepthFunc(GL_EQUAL);
for (unsigned int i = 0; i < m_lights.size(); i++)
{
m_activeLight = m_lights[i];
object->RenderAll(m_activeLight->GetShader(), this);
}
glDepthMask(GL_TRUE);
glDepthFunc(GL_LESS);
glDisable(GL_BLEND);
//Temp Render
Window::BindAsRenderTarget();
Camera* temp = m_mainCamera;
m_mainCamera = m_altCamera;
glClearColor(0.0f, 0.0f, 0.5f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
m_defaultShader->Bind();
m_defaultShader->UpdateUniforms(m_planeTransform, *m_planeMaterial, this);
m_plane->Draw();
m_mainCamera = temp;
}
This should render the scene in the full window but with very low resolution.
@BennyQBD Your new commits seem to fix the render to texture issue.
@Colt-Zero That code is not left as is; you must have modified something. Please be more specific about what you uncommented or added.
@EvilSocket1 I'd like to see it working as it was in the actual tutorial, though, to be sure and not just assume it is working. A low-resolution render of the scene at full size isn't the same as rendering to a texture.
@Zammalad It is rendering to a small texture and using that texture as the diffuse texture for a mesh. Look at the code; it is definitely rendering to a texture.
@Zammalad
I only changed a couple of things. Very small stuff really.
I turned the render to texture code in the rendering function back on. I resized the render to texture plane so that it didn't occupy the full window. And I switched one of the planes in the scene to the monkey head model, so that there would be more distinction between the scene Benny was rendering in the video and the scene I was rendering.
I'm assuming this problem is solved for everyone.
So what was the issue and how was it fixed?
EDIT: OK, after a while I figured it out. I can't really explain why, but I can point to the place in the code where it happens, so anybody following along with the tutorial (and perhaps coding in another language) can solve this issue without doing a blind git pull from the C++ repository and hoping it is somehow solved. The whole issue has to do with the creation of the plane primitive in the rendering engine that we want to draw the texture on. Here is the code in question: https://github.com/BennyQBD/3DEngineCpp/blob/3643710fe242b8381e5e59ed50e723cba1b44c76/3DEngineCpp/renderingEngine.cpp
Lines 50-57:
Vertex vertices[] = { Vertex(Vector3f(-1,-1,0),Vector2f(1,0)),
Vertex(Vector3f(-1,1,0),Vector2f(1,1)),
Vertex(Vector3f(1,1,0),Vector2f(0,1)),
Vertex(Vector3f(1,-1,0),Vector2f(0,0)) };
int indices[] = { 2, 1, 0,
3, 2, 0 };
and creating a mesh from it on line 60:
g_mesh = new Mesh(vertices, sizeof(vertices)/sizeof(vertices[0]), indices, sizeof(indices)/sizeof(indices[0]), true);
This is what causes the strange effect one can see in the original post. Perhaps it has something to do with the texture coordinates; I really don't know. But the only thing you have to do to get this running is to load a mesh with (I think that's the reason) proper texture coordinates. If you choose the Monkey or the Box, or any other object, it automagically works. This is how the code in the later commits makes it work, e.g.:
m_plane = new Mesh("./res/models/plane.obj");
I hope someone finds this useful.
The whole gray/brown/what-have-you plane issue has, in my opinion, little to do with hardware. I found out that when you don't use Assimp for setting up the mesh that the scene will be rendered to, like Benny does in this tutorial, you just calculate the tangents and adjoin them as a vertex attribute: problem solved! I think the problem is caused by a badly shaped TBN matrix messing up the texture coordinates in the shaders (this matrix will be filled with zero tangent vectors from the default constructor unless you calculate them). You can find the function responsible for these tangent calculations on GitHub, somewhere in the mesh class. Here's an example of the render-to-texture scene, not using Assimp (Assimp will do this calculation for you automatically). I rotated the camera a bit, as you can see.

Peter
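For reference, a minimal standalone sketch of the per-triangle tangent calculation described above; the Vec2/Vec3 types and the function name are illustrative stand-ins, not the engine's actual classes:

```cpp
// Illustrative sketch of the standard per-triangle tangent formula.
// For the fullscreen quad this yields a non-zero tangent, so the TBN
// matrix in the shaders is no longer degenerate.
struct Vec2 { float x, y; };
struct Vec3 { float x, y, z; };

static Vec3 CalcTriangleTangent(const Vec3 p[3], const Vec2 uv[3])
{
    // Position edges and texture-coordinate deltas relative to vertex 0.
    Vec3 e1 = { p[1].x - p[0].x, p[1].y - p[0].y, p[1].z - p[0].z };
    Vec3 e2 = { p[2].x - p[0].x, p[2].y - p[0].y, p[2].z - p[0].z };
    float du1 = uv[1].x - uv[0].x, dv1 = uv[1].y - uv[0].y;
    float du2 = uv[2].x - uv[0].x, dv2 = uv[2].y - uv[0].y;

    float det = du1 * dv2 - du2 * dv1;
    float f = (det != 0.0f) ? 1.0f / det : 0.0f;

    // T = f * (dv2 * e1 - dv1 * e2)
    Vec3 t = { f * (dv2 * e1.x - dv1 * e2.x),
               f * (dv2 * e1.y - dv1 * e2.y),
               f * (dv2 * e1.z - dv1 * e2.z) };
    return t;
}
```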
Visual Studio is not capable of compiling your project anymore. This is because you used
GLenum drawBuffers[m_numTextures];
and Visual Studio doesn't want you to create dynamic arrays this way. The easy fix

GLenum* drawBuffers = new GLenum[m_numTextures];

does not work either (but at least it compiles). It gives you this: