pioneerspacesim / pioneer

A game of lonely space adventure
https://pioneerspacesim.net

Renderer #913

Closed Luomu closed 12 years ago

Luomu commented 12 years ago

Branch here, explore the commits: https://github.com/Luomu/pioneer/tree/renderer

Comments & design help welcome, there's still a lot to be done

Goals:

- make it possible to port to OpenGL ES2 (or even DirectX, if someone wants to) without going crazy
- get rid of immediate mode drawing
- remove and reduce OpenGL calls in general source code - no more glEverything()
- reduce code duplication (there are only so many ways you want to draw a quad); recurring cases should primarily be wrapped in methods like FillRect before calling the renderer directly (a sketch of such a helper is below)
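As an example of the kind of wrapper meant above (FillRect is only a name taken from the goals - the signature and the VertexArray usage here are assumptions, not code from the branch):

//hypothetical helper - wraps the common "draw a filled quad" case so callers
//never touch GL or the renderer internals directly
void FillRect(Renderer *r, float x, float y, float w, float h, const Color &c)
{
    VertexArray va;
    //two triangles forming the quad, in 2D/gui coordinates
    va.Add(vector3f(x,     y,     0.f), c);
    va.Add(vector3f(x + w, y,     0.f), c);
    va.Add(vector3f(x,     y + h, 0.f), c);
    va.Add(vector3f(x,     y + h, 0.f), c);
    va.Add(vector3f(x + w, y,     0.f), c);
    va.Add(vector3f(x + w, y + h, 0.f), c);
    r->DrawTriangles2D(&va, 0, TRIANGLES);
}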

The current class definition is short enough to be pasted here (extra comments with XX):

// Renderer base, functions return false if
// failed/unsupported
class Renderer
{
public:
    Renderer(int width, int height);
    virtual ~Renderer();

    virtual const char* GetName() = 0;

    virtual bool BeginFrame() = 0;
    virtual bool EndFrame() = 0;
    //traditionally gui happens between endframe and swapbuffers
    virtual bool SwapBuffers() = 0;

    //render state functions
    //XX expand whenever sensible but this should be enough to demonstrate the idea
    virtual bool SetBlendMode(BlendMode type) { return false; }
    //virtual bool SetState(DEPTH_TEST, false) or
    //virtual bool SetDepthTest(false) ?

    //XX obvious stuff like SetViewport will be added

    virtual bool SetLights(int numlights, const Light *l) { return false; }
    virtual bool SetAmbientColor(const Color &c) { return false; }

    //drawing functions
    //2d drawing is generally understood to be for gui use (unlit, ortho projection)
    //XX here I was still using specialized data structures
    //XX could also be an array of vector3fs and array of Colors - let me know if you care
    virtual bool DrawLines(int vertCount, const LineVertex *vertices, LineType type=LINE_SINGLE)  { return false; }
    virtual bool DrawLines2D(int vertCount, const LineVertex2D *vertices, LineType type=LINE_SINGLE)  { return false; }
    //unindexed triangle draw
    //XX here I started using a generic VertexArray because there are too many attribute combinations
    virtual bool DrawTriangles(const VertexArray *vertices, const Material *material=0, PrimitiveType type=TRIANGLES)  { return false; }
    virtual bool DrawTriangles2D(const VertexArray *vertices, const Material *material=0, PrimitiveType type=TRIANGLES)  { return false; }
    //indexed triangle draw
    virtual bool DrawSurface(const Surface *surface) { return false; }
    virtual bool DrawSurface2D(const Surface *surface) { return false; }

    //virtual bool DrawPoints(...)
    //virtual bool DrawPointSprites(...) //high amount of textured quads for particles etc

    //XX for complex geometry - the data will be processed & cached into something the renderer prefers
    virtual bool DrawBufferThing(BufferThing *thing) { return false; }

protected:
    int m_width;
    int m_height;
};
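To make the intended flow concrete, a typical frame might look roughly like this (illustrative only: the renderer subclasses are described below, BLEND_SOLID and the objects being drawn are invented placeholders):

//rough usage sketch, not actual game code
Renderer *renderer = new RendererGL2(800, 600);

while (running) {
    renderer->BeginFrame();
    renderer->SetBlendMode(BLEND_SOLID);            //assumed enum value
    renderer->SetLights(numLights, lights);
    renderer->DrawSurface(shipSurface);             //3D, lit
    renderer->EndFrame();
    renderer->DrawTriangles2D(&guiQuads, &guiMat);  //gui goes between EndFrame and SwapBuffers
    renderer->SwapBuffers();
}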

Core concepts

- VertexArray: at first I used specialized data structures like ColoredVertex (see DrawLines for this style) but there are too many possible combinations of attributes.
- Material: holds info on, well, material properties. And textures.
- Surface: has one vertex array, one index array (for indexed triangle drawing) and a material.
- BufferThing: complex, static geometry that is expected to be buffered. Can have multiple surfaces. This is not fully fleshed out and I ran out of names here :) Mesh? Solid? Model? See background for example use.
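Purely as an illustration of how these pieces could relate (the member layout here is guesswork, not the actual branch code):

//very rough sketch - only the relationships matter here
struct VertexArray {
    std::vector<vector3f> positions;
    std::vector<Color>    colors;
    std::vector<vector2f> uvs;       //unused attributes simply stay empty
    std::vector<vector3f> normals;
};

struct Material {
    Color diffuse;
    Texture *texture0;
};

struct Surface {
    VertexArray vertices;
    std::vector<Uint16> indices;     //for indexed triangle drawing
    Material material;
};

//"BufferThing": complex static geometry, may have several surfaces
struct BufferThing {
    std::vector<Surface> surfaces;
};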

Renderer choice

I have started two renderers (quickly put together - will be rewritten). RendererLegacy is the fixed-function AKA shaderless one. We can keep it around as long as someone wants to maintain it. RendererGL2 is the new OpenGL 2.0 or 2.1 one that aims to shed as much historical baggage as possible. From a player's point of view it will not be possible to swap between these without restarting (not impossible but I wouldn't bother).

Materials & Shaders

It is desirable to let the renderer choose and manage the appropriate shaders; for example, when drawing unlit 2D lines the GL2 renderer is expected to choose simpleShader or similar. If textures are needed, that can be determined using a Material. It's not always straightforward: for example, with the planet atmosphere it's not enough to communicate the choice of shader, but also a bunch of uniforms. Perhaps something called "Effect" that can store the appropriate parameters? See Planet.cpp and the rings drawing to see how this is shaping up.
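As a strawman for the "Effect" idea (nothing here exists yet, all names are invented):

//strawman only - bundles a shader with the uniforms it needs, so a caller
//like Planet.cpp can just say "draw this with the atmosphere effect"
class Effect {
public:
    virtual ~Effect() {}
    virtual void Apply() = 0;     //called by the renderer before drawing: bind program, upload uniforms
    virtual void Unapply() = 0;
};

class AtmosphereEffect : public Effect {   //hypothetical example
public:
    float planetRadius;
    float atmosphereDensity;
    virtual void Apply();
    virtual void Unapply();
};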

Unknown territory

LMRModel is such a big blob of stuff that I don't know how easy it will be to make it play nice with Renderer. I would like to add a model base class separate from LMRModel anyway. Terrain is another special case. I'd expect to add something like DrawTerrain(Patch *p) that works somewhat like DrawBufferThing. SDL or not, it would be nice if the Renderer could also create the context.

Transformations

The classic matrix stack (pushMatrix, popMatrix) we rely on everywhere needs to be replaced somehow. Also, projections: Renderer will have SetProjection or SetCamera type stuff.
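A rough idea of the shape this could take, with invented names - callers build matrices themselves and hand the finished result over:

//possible additions to the Renderer interface
class Renderer {
public:
    //...existing interface as above...
    virtual bool SetTransform(const matrix4x4f &m) { return false; }   //modelview
    virtual bool SetPerspectiveProjection(float fov, float aspect, float znear, float zfar) { return false; }
    virtual bool SetOrthographicProjection(float xmin, float xmax, float ymin, float ymax, float zmin, float zmax) { return false; }
};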

Render.cpp - this will be slowly erased. Post processing needs to go somewhere else. I have a branch related to this (see http://github.com/Luomu/pioneer/blob/filters/src/render/RenderPostControl.cpp) but it needs to be reconsidered wrt. Renderer.

Brianetta commented 12 years ago

From a player's point of view it will not be possible to swap between these without restarting (not impossible but I wouldn't bother).

Might it be possible to have certain settings (including the renderer choice) become active at the next hyperspace jump? The game logic, and any plot stuff, should be unaffected by renderer choice, and the whole physical universe gets Big Banged at that point.

robn commented 12 years ago

Might it be possible to have certain settings (including the renderer choice) become active at the next hyperspace jump?

I wouldn't bother. The renderer is fairly fundamental and there should be no good reason to change it.

We could do feature detection and choose one as appropriate at startup, or we could just set it in config. We could also arrange for different renderers to be compiled in and out - you wouldn't want to carry the fixed-function renderer on an ES2 platform.
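For the config route, something this small would probably do (sketch only; the "Renderer" key and the config accessor are assumptions):

//pick a renderer at startup from config.ini
Renderer *CreateRenderer(const Config &cfg, int width, int height)
{
    const std::string choice = cfg.String("Renderer");
    if (choice == "legacy")
        return new RendererLegacy(width, height);
    //default to the GL2 path; feature detection could slot in here later
    return new RendererGL2(width, height);
}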

@Luomu I have some comments but need to study your code first and think things through. I'll try to get you something in the next couple of days. Overall though I think the idea is spot on - well done!

Luomu commented 12 years ago

I've been shuffling things around and experimenting, but no major changes in the direction.

robn commented 12 years ago

Sorry, I've been busy. I'll be back on deck properly this week.

robn commented 12 years ago

This is awesome. I'm 100% behind what you're trying to achieve here.

VertexArray: at first I used specialized data structures like ColoredVertex (see DrawLines for this style) but there are too many possible combinations of attributes.

There's a number of things that might go into a single vertex depending on needs: position, colour, texture coordinate, normal, etc. We probably don't want a heavy structure for things that we aren't using, but predefining every possible combination is equally painful. Could we do some kind of templated class composition to allow different parts of the code to declare their needs, eg (off the top of my head):

struct VertexPosition {
    vector3f position;
};
template<class T>
struct VertexColor : public T {
    Color color;
};
template<class T>
struct VertexTexCoord : public T {
    vector2f texcoord;
};
template<class T>
struct VertexNormal : public T {
    vector3f normal;
};

typedef VertexColor< VertexPosition > ColoredVertex;
typedef VertexTexCoord< VertexPosition > TexturedVertex;
typedef VertexNormal< VertexColor< VertexPosition > > LightVertex;

Then a model mesh rendering thing could make itself a MeshVertex or something. Yeah, this is convoluted. Just an idea.
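For what it's worth, usage would then look something like this (sketch; assumes Color has an (r,g,b,a) constructor):

//filling a vertex list with the composed type
std::vector<LightVertex> verts;
LightVertex v;
v.position = vector3f(0.f, 1.f, 0.f);
v.normal   = vector3f(0.f, 1.f, 0.f);
v.color    = Color(1.f, 1.f, 1.f, 1.f);
verts.push_back(v);
//sizeof(LightVertex) stays tight, so the whole vector can be buffered in one go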

Material: holds info on, well, material properties. And textures.

I expect Texture.h & Texture.cpp will move into the render?

More comments on Material below.

BufferThing: Complex, static geometry that is expected to be buffered. Can have multiple surfaces. This is not fully fleshed out and I ran out of names here :) Mesh? Solid? Model? See background for example use.

We do already have a BufferObject. Steal its name? Could also do something templated, eg Buffer<Mesh>, where Buffer knows how to get arbitrary geometry into a buffer.
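Roughly what I mean by the templated version (GetVertexData/GetVertexDataSize are assumed accessors on the geometry type, not existing code):

//Buffer knows about VBOs, the geometry type only exposes its raw data
template<class GeometryT>
class Buffer {
public:
    explicit Buffer(const GeometryT &geom) {
        glGenBuffers(1, &m_vbo);
        glBindBuffer(GL_ARRAY_BUFFER, m_vbo);
        glBufferData(GL_ARRAY_BUFFER, geom.GetVertexDataSize(), geom.GetVertexData(), GL_STATIC_DRAW);
        glBindBuffer(GL_ARRAY_BUFFER, 0);
    }
    ~Buffer() { glDeleteBuffers(1, &m_vbo); }
    GLuint GetHandle() const { return m_vbo; }
private:
    GLuint m_vbo;
};

//Buffer<Mesh> staticGeometry(mesh);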

Renderer choice From a player's point of view it will not be possible to swap between these without restarting (not impossible but I wouldn't bother).

Neither would I. It should be configurable via config.ini (no UI necessary). Someday I'd like it to inspect the hardware capabilities and make the best choice.

Materials & Shaders It is desirable to let the renderer choose and manage the appropriate shaders, for example when drawing unlit 2D lines the GL2 renderer is expected to choose simpleShader or similar. If textures are needed, that can be determined using a Material. It's not always straightforward, for example with the planet atmosphere it's not enough to communicate the choice of shader but also a bunch of uniforms. Perhaps something called "Effect" that can store the appropriate parameters? See Planet.cpp and the rings drawing to see how this is shaping up.

I'm not sold on Material itself knowing about different types of materials (atmosphere, planet rings, etc), though I see what you're doing with the shader selection. I think the comment above the definition of Material in Renderer.h is kind of on the right track.

So here we're framing shaders as a kind of "programmable material", right? If that's the case then the uniforms aren't so different from the traditional material lighting parameters and texture selections, and we could do something like (roughly):

struct Material {
    Color diffuse;
    Texture *texture0;
};
struct ShaderMaterial : public Material {
    Shader *shader;
    ShaderParams *shaderParams;
};

// structurally similar to Texture
class Shader {
public:
    void Activate(ShaderParams *params = 0) {
        glUseProgram(m_program);
        if (params) params->Apply(m_program);
    }
    void Deactivate() {
        glUseProgram(0);
    }
private:
    GLuint m_program;
};

// class for shader parameters. one explicit version that could be parameterised via defines or templates (like the SHADER_UNIFORM defines)
class ShaderParams {
public:
    vector4f foo;
    float bar;

    void Apply(GLuint program) {
        glUniform4f(glGetUniformLocation(program, "foo"), foo.x, foo.y, foo.z, foo.w);
        glUniform1f(glGetUniformLocation(program, "bar"), bar);
    }
};

// or, a map-based construction. possibly more flexible, possibly slower and more dangerous
class ShaderParams {
private:
    class Uniform {
    public:
        virtual ~Uniform() {}
        virtual void Apply(GLuint program) = 0;
    protected:
        Uniform(const std::string &_name) : m_name(_name) {}
        std::string m_name;
    };
    class UniformVec4 : public Uniform {
    public:
        UniformVec4(const std::string &name, const vector4f &v) : Uniform(name), m_value(v) {}
        virtual void Apply(GLuint program) {
            glUniform4f(glGetUniformLocation(program, m_name.c_str()), m_value.x, m_value.y, m_value.z, m_value.w);
        }
    private:
        vector4f m_value;
    };
    class UniformFloat : public Uniform {
    public:
        UniformFloat(const std::string &name, const float &f) : Uniform(name), m_value(f) {}
        virtual void Apply(GLuint program) {
            glUniform1f(glGetUniformLocation(program, m_name.c_str()), m_value);
        }
    private:
        float m_value;
    };

    //pointers, because the base class is abstract (ownership/cleanup left out of the sketch)
    std::map<std::string,Uniform*> m_uniforms;

public:
    void SetVec4(const std::string &name, const vector4f &v) {
        m_uniforms[name] = new UniformVec4(name, v);
    }
    void SetFloat(const std::string &name, const float &f) {
        m_uniforms[name] = new UniformFloat(name, f);
    }

    void Apply(GLuint program) {
        for (std::map<std::string,Uniform*>::iterator i = m_uniforms.begin(); i != m_uniforms.end(); ++i)
            i->second->Apply(program);
    }
};
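Usage would then be along these lines (the uniform names are just examples):

ShaderParams params;
params.SetVec4("atmosColor", vector4f(0.2f, 0.4f, 0.8f, 1.0f));
params.SetFloat("atmosDensity", 0.01f);
shader->Activate(&params);   //Apply() walks the map and uploads each uniform
//...draw...
shader->Deactivate();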

Unknown territory LMRModel is such a big blob of stuff that I don't know how easy it will be to make it play nice with Renderer. I would like to add a model base class separate from LMRModel anyway.

I think a good first step for LMR would be to separate the Lua "compiler" and mesh generation from the renderer. So the Draw becomes (vaguely):

    Mesh dynamicMesh = lmr->GetDynamicMesh(modelName, lmrOrGameState);
    renderer->Draw(m_staticMesh);
    renderer->Draw(dynamicMesh);

Obviously Mesh means something that can cover the vertex arrays/indices, materials, etc. I dunno what it looks like inside.
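Just to have something concrete to poke at, it could be as little as this (pure guesswork):

//a collection of surfaces (geometry + material) plus a local transform,
//so dynamic sub-meshes can be positioned relative to the static part
struct Mesh {
    std::vector<Surface*> surfaces;
    matrix4x4f transform;
};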

Terrain is another special case. I'd expect to add something like DrawTerrain(Patch *p) that works somewhat like DrawBufferThing.

Same may apply there? renderer->Draw(patch->GetMesh())

SDL or not, it would be nice if the Renderer could also create the context.

Do you want it to actually create the context, or just give the option of passing some parameters for its creation? In the latter case some kind of Screen or class that takes context parameters in its constructor, and then subclass into SDLScreen, GLUTScreen, whatever.
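Sketch of the second option (all names invented here):

struct ContextParams {
    int width, height, bpp;
    bool fullscreen;
    bool vsync;
};

class Screen {
public:
    explicit Screen(const ContextParams &params) : m_params(params) {}
    virtual ~Screen() {}
protected:
    ContextParams m_params;
};

class SDLScreen : public Screen {
public:
    explicit SDLScreen(const ContextParams &params) : Screen(params) {
        //would do the SDL_Init / SDL_SetVideoMode (with the OpenGL flag) dance here
    }
};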

Transformations the classic matrix stack (pushMatrix, popMatrix) we rely on everywhere needs to be replaced somehow.

This is moving us towards having C++ objects that represent things, yes? Then surely there isn't a need to push/pop GL state? Don't we just say thing->Draw() and it sorts all that out?

If it's just about transformations, can we give each drawable its own position, rotation and scale that we translate to before drawing it? Maybe we need a Scene object that things are drawn "relative" to, which takes care of this?
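Something like this, maybe (all invented; SetTransform stands in for whatever modelview-setting call the Renderer ends up with):

class Drawable {
public:
    virtual ~Drawable() {}
    virtual void Draw(Renderer *r) = 0;
    matrix4x4f transform;    //position/rotation/scale baked into one matrix
};

class Scene {
public:
    void Add(Drawable *d) { m_drawables.push_back(d); }
    void Draw(Renderer *r) {
        for (size_t i = 0; i < m_drawables.size(); ++i) {
            //everything is drawn "relative" to the scene
            r->SetTransform(m_transform * m_drawables[i]->transform);
            m_drawables[i]->Draw(r);
        }
    }
    matrix4x4f m_transform;
private:
    std::vector<Drawable*> m_drawables;
};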

Also, projections. Renderer will have SetProjection or SetCamera type stuff.

You can probably borrow bits of Camera.cpp for that?

Render.cpp - this will be slowly erased. Post processing needs to go somewhere else. I have a branch related to this (see http://github.com/Luomu/pioneer/blob/filters/src/render/RenderPostControl.cpp) but it needs to be reconsidered wrt. Renderer.

Given recent developments you can just drop it for now :)

How about if we have a PostProcessor (or better, Filter) baseclass, with BlurFilter/SpinFilter/FireFilter/whatever subclasses. Then we do renderer->AddFilter() repeatedly to add a chain of these things?
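Roughly (RenderTarget and the concrete filters are placeholders):

class Filter {
public:
    virtual ~Filter() {}
    //reads the source texture, writes the filtered result into the target
    virtual void Run(Texture *source, RenderTarget *target) = 0;
};

class BlurFilter : public Filter {
public:
    virtual void Run(Texture *source, RenderTarget *target);   //fullscreen quad + blur shader
};

//in the renderer:
//void Renderer::AddFilter(Filter *f) { m_filters.push_back(f); }
//at the end of the frame each filter runs in order, ping-ponging between two render targets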

Luomu commented 12 years ago

The key question now is: how complete do you want this to be before merging is possible? I think it would be productive to get the basics in ASAP and then proceed with porting the game code... I've done a bunch of this already, but I would be ready to throw all those changes away and then redo them one by one, so things can be better reviewed.

For things left to be done, I've organised them by importance.

Priority one:

The business with the core data structures must be worked out. The core data structures are VertexArray, Material, Surface and StaticMesh (ex-BufferThing - I think this name fits). I've been considering making them refcounted; it would make it nicer to share materials, for example...

Do you want all this stuff under Render namespace? I sort of neglected that.

I am too tired at this minute to think about improving the VertexArray. But I like that it is currently flexible:

VertexArray va; //could set flags for attribs you intend to use
va.Add(position, colour);
va.Add(position, colour);
//I've been adding the Add() overloads since it saves typing, but it is good to be able to add attribs individually:
va.positions.push_back(position);
va.colors.push_back(colour);
//This system just doesn't offer compile-time safety:
va.Add(position);
//oops, forgot a colour... let's hope an assert catches it

I'm fine with this really, since I'm careful ;)
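For reference, a rough sketch of the kind of VertexArray being discussed - the member names follow the usage above, the rest is assumed rather than copied from the branch:

struct VertexArray {
    std::vector<vector3f> positions;
    std::vector<Color>    colors;

    void Add(const vector3f &pos) { positions.push_back(pos); }
    void Add(const vector3f &pos, const Color &col) {
        positions.push_back(pos);
        colors.push_back(col);
    }
    int GetNumVerts() const {
        //attributes that are used at all must be supplied for every vertex
        assert(colors.empty() || colors.size() == positions.size());
        return int(positions.size());
    }
};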

Priority two:

Shader usage

I'm going to drop the MaterialType approach.

We don't actually have that many special shaders now, by which I mean ones where common material properties are not enough. I count: background stars, planet rings, planet surface, planet atmosphere. That's it! I would just hack it like this:

in geosphere.cpp:
GeosphereSkyShader *s = geosphereSkyShaders[numlights];
s->SetSomeUniforms...
Material mat;
mat.shader = s;
renderer->DrawTriangles(skyVertexArray, &mat, etc);

in RendererGL2::DrawTriangles:

if (m->shader) {
   //oh, I see you brought your own. I assume you've already set your special uniforms.
} else {
   //determine from material props as usual
}

Then make it better (with Effect, or Material subclass) after things have progressed.

Transformations

I have to prototype this...

There's no need to follow the stack model. We have a matrix class, so it can be used to translate, rotate and so on. To communicate the result to the renderer it will either be:

renderer->SetModelViewMatrix(&matrix)

or, perhaps, tie the transforms to the camera:

activeCamera.SetViewTransform(&matrix)
renderer->DrawStuff() //renderer will have a pointer to the active camera

I don't really want to pass the transforms as parameters to the Draw* functions.

Port LMRModel

Nothing new to say about this, except it might be more straightforward than I thought. Just keep replacing draw calls, then see if it can be improved further. I started making a common Model baseclass (for ModelBody to use), but something broke and I gave up.

Port Terrain

No new revelations about this. Since the rendering parts are nicely contained in GeoSphere.cpp, I am not too concerned that things will break if this is initially skipped.

Priority three:

A bunch of state change functions need to be added in the style of SetBlendMode. I'd start adding these along with the porting work. There will be no gl* functions in the game code once this is done.

Textures

I expect Texture.h & Texture.cpp will move into the render?

You mean move to the folder/namespace? They could be moved. What I would expect to happen is that the current textures are reduced to contain only the file name and parameters like wrapmode, and the Renderers will have to provide their preferred method of texture loading and caching. Very low priority, since it is not a mess currently :)
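Something like this is what I have in mind (WrapMode and the cache details are assumptions):

//textures reduced to a description; the renderer owns the GL objects
//and caches them by filename
struct TextureDescriptor {
    std::string filename;
    WrapMode wrapMode;        //repeat/clamp etc.
    bool generateMipmaps;
};

//in a renderer:
//Texture *RendererGL2::GetTexture(const TextureDescriptor &desc) {
//    look up desc.filename in m_textureCache, load and upload it if missing
//}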

Context creation

Do you want it to actually create the context, or just give the option of passing some parameters for its creation?

For RendererLegacy & RendererGL2 this would just mean that the SDL video init code is moved from Pi.cpp to the Renderer's constructor. The Renderer constructor can take the desired graphics settings as a parameter. Other Renderers might not use SDL. Absolutely no need to do this in the immediate future.
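Concretely, it could look something like this (SDL 1.2 style, as we use now; GraphicsSettings is an invented name):

//the video init moves from Pi.cpp into the renderer constructor
RendererLegacy::RendererLegacy(const GraphicsSettings &s) :
    Renderer(s.width, s.height)
{
    SDL_InitSubSystem(SDL_INIT_VIDEO);
    SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 1);
    Uint32 flags = SDL_OPENGL;
    if (s.fullscreen) flags |= SDL_FULLSCREEN;
    if (!SDL_SetVideoMode(s.width, s.height, s.bpp, flags)) {
        //bail out / report the error
    }
}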

Luomu commented 12 years ago

This is how the matrix stuff is turning out: e367e7154fce1ab4672f3c834df93e2d4123862a. The occasional push/pop can still be necessary until everything is unified.

Luomu commented 12 years ago

Now that I've had a bit more time to think and experiment I might just follow this suggestion:

struct VertexPosition {
    vector3f position;
};
template<class T>
struct VertexColor : public T {
    Color color;
};
template<class T>
struct VertexTexCoord : public T {
    vector2f texcoord;
};
template<class T>
struct VertexNormal : public T {
    vector3f normal;
};

typedef VertexColor< VertexPosition > ColoredVertex;
typedef VertexTexCoord< VertexPosition > TexturedVertex;
typedef VertexNormal< VertexColor< VertexPosition > > LightVertex;

Then a model mesh rendering thing could make itself a MeshVertex or something. Yeah, this is convoluted. Just an idea.

Or just define a bunch of structs as they are needed, after all. I like to do this:

struct Vertex {
   vec3f position;
   vec4f color;
};
...
//fill data
...
//can be buffered at once
glBufferData(..., numvertices*sizeof(Vertex), data, GL_STATIC_DRAW);
//easy to set attrib pointers
glVertexPointer(3, GL_FLOAT, sizeof(Vertex), (const GLvoid*)offsetof(Vertex, position));
glColorPointer(4, GL_FLOAT, sizeof(Vertex), (const GLvoid*)offsetof(Vertex, color));

I'm not sure if I want to, or can, do a system that supports absolutely every possible combination. Anyway, I am going to close this and open a pull request; it's easier to see the changes that way. I hope nobody minds.

robn commented 12 years ago

Phew. I panicked when I got the close notification.