Closed: datahead8888 closed this issue 8 years ago
Related tasks:
@Quintus @Luiji @brianvanderburg2 @sauer2 --- I'd be interested in your thoughts.
To be honest, that more or less sounds like a complete rewrite in terms of efforts.
I'm just mind-blown by all the bullet points. =(
Let's remember to pace ourselves here. I doubt we're going to get the rendering system or the physics system finished by 2.0.0. @sauer2 is right that this is a lot of effort (though I don't think it's nearly a complete rewrite).
For transitions like these we'd need to do a lot of incremental changes. For instance, we'd slowly migrate off our use of OpenGL data structures where OpenGL isn't being used directly, isolate OpenGL usage to few enough functions that it's practical to rewrite them using a new API, etc.
The way I see it is this should all be seen as a background process while we're doing everything else. I'm still struggling to get CEGUI to work while CMake keeps breaking, and there's a plethora of features that we can implement that don't depend on any of this.
TL;DR This should all be considered low-priority, long-term concepts.
I'm just mind-blown by all the bullet points. =(
I'm sorry it's so long, but I wanted to express that there are other things to consider before just assuming that we should replace OpenGL with a higher level library or replace physics with a library. I probably should have left out the geometry and image processing items for now, but this is actually a discussion about both development philosophy and what solutions to use. If you want I could downsize the above post a bit, but most of it was points I wanted to convey. I did try to find a compromise, though.
EDIT: Trimmed the above post down.
I'm also going to point out that SFML and Allegro both allow direct OpenGL access (which I've used successfully before) so we're not giving up any control, just using external routines to simplify the code. Though, for the purposes of SMC, I honestly don't think we'd ever need 3-D primitives, even for some of the fancier effects.
To exemplify how well SFML and Allegro work with direct access, I've written a 3-D game on top of Allegro just fine. Small bits of custom OpenGL in our code are fine. My main grief with SMC as it stands is that we use SDL and implement a 2-D engine directly on OpenGL, when we could replace SDL with something that comes with a 2-D engine and reduce our OpenGL use to basic 3-D code (which doesn't even exist yet).
Also, could you please extend your description of a "partial 3-D mode"? There are a few things I can imagine that meaning.
I'm also going to point out that SFML and Allegro both allow direct OpenGL access
I saw this for Allegro but didn't know SFML had it. To your point, OpenGL could be used with these later if 3D is needed. My reasoning was that a fully 3D API would allow easy 3D usage if needed later without necessarily delving into things like GLSL. Are all API calls hardware accelerated through OpenGL?
I honestly don't think we'd ever need 3-D primitives, even for some of the fancier effects.
When doing a 2D projection rather than a perspective projection (3D depth), you can still make use of z coordinates if needed. 2D is a subset of 3D.
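As an aside, the z-ordering point can be illustrated without any OpenGL at all. The sketch below (all names hypothetical, not TSC code) sorts 2-D sprites by their z coordinate before drawing, the painter's algorithm that an orthographic projection with depth effectively gives you for free:

```cpp
#include <algorithm>
#include <string>
#include <vector>

// Hypothetical drawable: a 2-D sprite that still carries a z coordinate,
// so layers (background, tiles, player, HUD) can be ordered explicitly.
struct Sprite {
    std::string name;
    float x, y, z; // z is unused by a 2-D projection except for ordering/blending
};

// Painter's algorithm: draw back-to-front so alpha blending composes correctly.
void sortForDrawing(std::vector<Sprite>& sprites) {
    std::sort(sprites.begin(), sprites.end(),
              [](const Sprite& a, const Sprite& b) { return a.z < b.z; });
}
```

The same ordering would fall out of the depth buffer if real z values were handed to OpenGL instead.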
My main grief with SMC as it is is that we use SDL and implement a 2-D engine directly on OpenGL when we could replace SDL with something that comes with a 2-D engine
If we end up using a high level API that provides the windowing functionality SDL provides, I also think we should get rid of SDL.
Also, could you please extend your description of a "partial 3-D mode"?
Yoshi's Island is primarily a 2D platformer game with sprites. However, when you fight Baby Bowser at the end, you can throw eggs in 3D (while moving in 2D) to defeat him. This game also has a number of platforms that rotate in 3D while you interact with them in 2D. I'm not suggesting we do this any time soon, but I think it makes sense to make decisions that easily allow for it.
What thoughts do you have on physics API's? Thank you for sharing your thoughts.
If we end up using a high level API that provides the windowing functionality SDL provides, I also think we should get rid of SDL.
My whole idea is replacing our current SDL installation. The thing is, the latest SDL variants come with the same functionality that SFML and Allegro do, so it's all about API preference.
What thoughts do you have on physics API's? Thank you for sharing your thoughts.
I never really understood why SMC needed a full-on physics engine. I imagine FluXy had something in mind I didn't know about.
I never really understood why SMC needed a full-on physics engine. I imagine FluXy had something in mind I didn't know about.
What do you mean by SMC not needing a full-on physics engine? The wiki discussion I found on the main site made references to a switch to the Bullet physics API.
Do Allegro / SFML support any type of lighting system (#94)?
What do you mean by SMC not needing a full-on physics engine? The wiki discussion I found on the main site made references to a switch to the Bullet physics API.
I'm aware of the wiki... I know that it's been on the to-do; I just don't know why it's been on the to-do.
Do Allegro / SFML support any type of lighting system (#94)?
No, but they do support shaders so we can implement that ourselves if we so desire.
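To make the shader suggestion concrete, here is a hedged sketch of the per-pixel math such a 2-D lighting shader would run, written on the CPU purely for illustration. The `Light` struct and function names are hypothetical; real code would put this in a GLSL fragment shader fed through the SFML/Allegro shader API.

```cpp
#include <algorithm>
#include <cmath>

// Hypothetical point light in screen space.
struct Light {
    float x, y;      // light position in pixels
    float radius;    // distance at which the light fades to zero
    float intensity; // 0..1 multiplier
};

// Brightness factor to multiply into a pixel's color: linear falloff
// with distance, clamped to zero outside the radius. This is the kind
// of computation a fragment shader would run once per pixel.
float lightFactor(const Light& l, float px, float py) {
    float dx = px - l.x, dy = py - l.y;
    float dist = std::sqrt(dx * dx + dy * dy);
    float falloff = std::max(0.0f, 1.0f - dist / l.radius);
    return falloff * l.intensity;
}
```

Summing `lightFactor` over several lights plus an ambient term gives a basic 2-D lighting model; smoother falloff curves are a one-line change.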
A very interesting discussion. Thank you @datahead8888 and @Luiji, I really appreciate it, especially as I don’t know about graphics programming.
My whole idea is replacing our current SDL installation. The thing is, the latest SDL variants come with the same functionality that SFML and Allegro do, so it's all about API preference.
I also have this feeling... This is not something that can be settled by argument anymore, so I suggest we vote on what to use.
I never really understood why SMC needed a full-on physics engine. I imagine FluXy had something in mind I didn't know about.
A real 2D physics engine would remove the need to code physics ourselves. It reduces code maintenance work and gives a more accurate physics feeling in SMC.
@datahead8888 has pointed out to me that writing really good OpenGL code is hard, and that an abstraction layer such as Ogre, Irrlicht, or whatever will do it better than an average developer, plus offer the ability to also generate DirectX code on win32 platforms (although I have to admit that enforcing OpenGL even on Windows is something I would do without any qualms -- go die, proprietary solutions). Therefore, relying on an OpenGL abstraction layer is a good thing IMO. As said, the final decision on which one to use will probably be made by voting.
Vale, Quintus
It sounds like the group has decided to use libraries for higher level rendering & physics, unless any of the others want to cast dissenting votes.
My suggestion is to wait on voting for which APIs to use until someone can research what the 3D alternatives offer. I think it might make sense to consider an API that directly supports lighting if possible, and there can be advantages to 3D support in case we unexpectedly need some 3D processing (another good example is using z coordinates with blending/transparency).
I don't really have direct experience with Allegro, SFML, Box2D, Bullet, or Ogre, but I think it makes sense to allow some time to look into them and not simply choose what looks easiest for 2D. This would allow for a more informed decision. There's no rush on this.
@Luiji - thank you for reading through my whole post above and having patience on a hot topic. It's appreciated. @Quintus - thank you, too.
Here are my basic opinions on all the libraries available, for reference (I've used all of the ones discussed except the latest SDL 2, where I've only peered at its documentation). Note that all of these are abstractions on top of OpenGL, and some of them (Allegro, Irrlicht, Ogre) are abstractions on top of DirectX. I know Allegro and SFML allow direct access to the underlying system (OpenGL/DirectX) and, in all likelihood, they all do.
SDL uses an APINAME_CamelCase naming scheme. I also can't tell if there's any support for shaders like Allegro has, but back when Allegro didn't have the shading API I found that it's pretty easy to integrate it ourselves. I can't tell if the C++ binding, SDLmm, works with SDL 1 or 2, but it probably only works with 1. Again, it shouldn't be too hard to write a light binding if we want, but I'd have more difficulty with this since the API documentation is much harder to navigate.
I have no opinion on the various physics libraries because I've never found a need to use them. Before we do anything on that, I think we should all fully read this http://www.learn-cocos2d.com/2013/08/physics-engine-platformer-terrible-idea/ that @datahead8888 linked, as a sort of cons to go with all our pros, before voting on that or whatever we do. I've only skimmed it and I want to get a full reading in before saying anything else on that matter.
So yeah, there's where I'm at. I've worked with a lot of graphics APIs because there was a summer where I was just trying to find which one I liked best. For awhile there it was Allegro but lately I've been using SFML. Ogre and Irrlicht only crossed my desk when I was trying out 3-D graphics programming and even there Irrlicht installed like a breeze and Ogre took years (in tech time). I often use different APIs for small projects to give myself a refresher so I can work with them if I ever have to (the same thing I do with programming languages).
@Luiji Apart from the fact that a jump 'n' run game doesn't suit real physics - which engines try to simulate - it may also require a total rewrite of the game loop and the timing dependencies to suit fixed timesteps. Then again, judging from the game's behaviour - it totally eats one CPU core if Vsync is off - this seems to be done already. Still, I think the real vs. jump 'n' run physics issue is argument enough not to use a 3rd-party engine.
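The fixed-timestep point can be sketched as a small accumulator loop; physics engines like Box2D and Bullet expect to be stepped with a constant dt, decoupled from the variable frame time. Everything below is illustrative, not TSC code:

```cpp
#include <functional>

// Minimal fixed-timestep accumulator loop. The caller feeds in each
// frame's elapsed wall-clock time; physics is always advanced by exactly
// dt, as many times as the accumulated time allows. Returns the number
// of physics steps taken this frame.
int runFixedSteps(double frameTime, double& accumulator, double dt,
                  const std::function<void(double)>& stepPhysics) {
    accumulator += frameTime;
    int steps = 0;
    while (accumulator >= dt) {
        stepPhysics(dt);   // constant dt keeps the simulation deterministic
        accumulator -= dt;
        ++steps;
    }
    return steps;  // rendering can interpolate using accumulator / dt
}
```

The leftover fraction in `accumulator` is what a renderer would use to interpolate between the last two physics states for smooth motion.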
@Luiji, thank you, now I understand why you were raising concerns about 3D rendering libraries. I would be curious if there is a 3D rendering library that is more lightweight and/or that supports lighting systems. I think I'll make a post on gamedev.net and see if anyone there can offer guidance. If I had to choose between Allegro and SFML, I would probably tend to choose the one that's more widely used.
If we don't end up using a physics / object collision library, we could try to find some existing slope collision code that has a permissive license and use that. We are already using some borrowed object collision code under the MIT license, from what I saw. SuperTux ran into a lot of nasty issues when they did slopes themselves. It would be interesting to look into an implementation, though.
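For reference, a minimal slope-collision sketch (hypothetical names, not taken from any existing codebase): treat the slope tile as a surface line and clamp the player's feet onto it, which is the usual platformer trick instead of full polygon collision.

```cpp
#include <algorithm>

// Hypothetical slope tile: the floor rises linearly from floorLeft
// (surface height at the tile's left edge) to floorRight, both measured
// upward from the tile's bottom.
struct SlopeTile {
    float floorLeft;
    float floorRight;
    float width;
};

// Surface height under a horizontal position inside the tile.
float surfaceHeight(const SlopeTile& t, float localX) {
    float f = std::min(1.0f, std::max(0.0f, localX / t.width));
    return t.floorLeft + (t.floorRight - t.floorLeft) * f;
}

// Push a foot point up onto the slope if it has sunk below the surface.
float resolveFootY(const SlopeTile& t, float localX, float footY) {
    return std::max(footY, surfaceHeight(t, localX));
}
```

The nasty cases SuperTux hit (slope-to-slope seams, moving onto a slope from a flat tile) come from stitching many such tiles together, not from the per-tile math itself.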
If we don't end up using a physics library, I'd personally be interested in upgrading OpenGL, as I'd view this as consistent with using our own physics system. Also, the performance issues in writing your own renderer are less prominent in 2D than in 3D. That's just my personal opinion, though -- the group can decide whatever it wants.
I have no opinion on the various physics libraries because I've never found a need to use them. Before we do anything on that I think we should all fully read this http://www.learn-cocos2d.com/2013/08/physics-engine-platformer-terrible-idea/
I read through the entire article and got the feeling the author doesn’t want to have real physics. A physics engine is fine if you want to implement real physics. If you don’t want real physics, then of course you shouldn’t use a physics engine. If for example you start out with the need "my object is not allowed to bump off the ground when landing" (which is real physics behaviour), then the entire comparison between using a physics engine and implementing all the behaviour yourself is flawed. A physics engine is not meant to cater for such cases, and if you want to force it into that, you of course have to override its behaviour.
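The "bump off the ground" example can be made concrete with a toy impact resolution. The restitution coefficient e is exactly the knob a physics engine exposes and a platformer would force to zero; this is a sketch, not any engine's actual API:

```cpp
// Toy vertical impact resolution. Real physics has restitution e > 0
// (some bounce on landing); a platformer wants e = 0, which is the kind
// of override described above rather than a fight against the engine.
struct Body {
    double vy;          // vertical velocity, negative = falling
    double restitution; // e in [0, 1]
};

// Velocity after hitting a horizontal floor.
void resolveFloorImpact(Body& b) {
    if (b.vy < 0.0)
        b.vy = -b.vy * b.restitution; // e = 0 kills the bounce entirely
}
```

In Box2D terms this corresponds to setting a fixture's restitution to zero, which the engine supports directly; the harder overrides are things like instant direction changes and variable-height jumps.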
The article made clear one thing to me, though. Do we want real physics in SMC? It would be a nice difference to almost any other platformer out there. On the other hand, people do not expect this from a platformer.
But still, we have a performance problem in SMC. Has anyone run SMC through gprof yet? I bet the collision stuff takes up much of the computation time, so it has to be changed in some way or another anyway.
As for the OpenGL library thing: I agree with @Luiji that Ogre is waaaay too large. Even Irrlicht appears not to really suit us. Note also that SMC currently uses SDL 1.x, so it may be easier to upgrade to SDL 2.x than to change the engine completely. But even if we stay with SDL, we should remove the custom OpenGL code in SMC anyway and rely on what SDL (or any other engine) gives us, except for special cases (which should be commented in the code as to why). If upgrading to SDL 2.x turns out to be so much work that we could just as well use another engine, then we can do that instead. C or C++ doesn’t matter, I think; good C libraries also use object-oriented programming by accepting their notion of this as the first parameter to any function. No need to write wrapper libraries if there are none.
Valete, Quintus
@carstene1ns posted a first hit on SDL 2 support: https://paste.xinu.at/NIZ5Ov/
Vale, Quintus
I'm not saying we should not use a renderer (Allegro / SDL / SFML), but there was a point that FluXy brought up:
At http://www.secretmaryo.org/phpBB3/viewtopic.php?f=6&t=9120&p=41892&hilit=Allegro#p41892 he said:
Don't use Allegro :P OpenGL and later OpenGL ES is the best way to go.
Here FluXy was suggesting OpenGL itself and seemed to suggest that OpenGL ES is a good way to get mobile support. Whatever we decide, renderer or bare OpenGL, it should take mobile support into account.
Also, @carstene1ns questioned whether support for the old OpenGL will be dropped anytime soon. FluXy had the same response in this (old) posting: http://www.secretmaryo.org/phpBB3/viewtopic.php?f=6&t=2885&p=17575#p17575. We probably need to find a hard source that confirms how soon support will be dropped for the old OpenGL. We neither want to waste developer time on something that won't happen for 10 years nor have the game suddenly break for users when support is dropped.
@datahead8888 SDL, Allegro and SFML are not rendering systems. They had software renderers built in back in the day, but today they all depend on OpenGL. I think the point of Fluxy was that Allegro
About OpenGL support, that is a no-brainer: versions older than OpenGL 3.0 are obsolete, and driver support is sometimes dropped for older versions, so you want to go with 3.x or higher. Support for OpenGL has to be checked per feature, not per version, because driver support varies.
Here is an article (from 2013) to give you an idea what that could mean: https://de.dolphin-emu.org/blog/2013/09/26/dolphin-emulator-and-opengl-drivers-hall-fameshame/?nocr=true
So, I dug around the OpenGL website, and headed over to the official OpenGL FAQ, which then in turn provided me with this fine article:
https://www.opengl.org/wiki/Legacy_OpenGL
I can confirm we use at least the glLoadIdentity() function (because I just stumbled over it), which has last been specified in OpenGL 2.1 (that is, SMC uses OpenGL 2.1). The current version of the OpenGL spec is 4.5. The linked article on legacy OpenGL states this on support of old versions:
Both AMD and NVIDIA provide backwards-compatible implementations at least on Windows and Linux. Apple does only provide an implementation of the core profile and supports core OpenGL 3.2 on Mac OSX. Intel provides an implementation for Windows up to OpenGL 3.1 with Sandy Bridge CPUs and OpenGL 4.0 with Ivy Bridge CPUs. However, Intel's Linux open-source driver developers have recently stated that they will not provide backward-compatibility on Linux.
I derive from that:
- On Windows, 2.1 is still supported by backward-compatibility implementations from AMD and NVIDIA, but not by Intel, which only supports 3.1 in future drivers.
- On Mac OS X, 2.1 is unsupported. At least 3.2 must be used.
- On Linux, 2.1 support will soon be dropped, at least by Intel’s open-source driver.
The article on legacy OpenGL closes with this:
GL 3.x hardware is widely available these days, though 4.x hardware has been around for several years now and will only become more widely available. Which you choose is up to you and your needs.
That is, 2.x OpenGL hardware is not even considered anymore.
Judging from this information I guess we need to move to a newer OpenGL version.
Opinions?
Valete, Quintus
@Quintus Judging from http://stackoverflow.com/questions/8044882/difference-between-opengl-3-x-and-4-x it seems like 3.x and 4.x aren't completely different, but differ in range of features.
I'm not the developer, but if I were, I would - if painless enough - limit the used features to 3.x to support as many platforms as possible, so that, once 3.x is deprecated, only a compilation switch to 4.x is required.
@Quintus, if our understanding from that article you linked to is correct, this issue is more urgent than the CEGUI upgrade.
I was speaking to grumbel, the project lead on Super Tux, and he quoted these sources. He initially was of the opinion that an OpenGL upgrade is not necessary for Super Tux.
NVidia - From https://developer.nvidia.com/opengl-driver :
Is NVIDIA going to remove functionality from OpenGL in the future? NVIDIA has no interest in removing any feature from OpenGL that our ISVs rely on. NVIDIA believes in providing maximum functionality with minimal churn to developers. Hence, NVIDIA fully supports the ARB_compatibility extension and Compatibility profile, and is shipping OpenGL drivers without any functionality removed, including any functionality that is marked deprecated. Will existing applications still work on current and future shipping hardware? NVIDIA has no plans for dropping support for any version of OpenGL on our existing and future shipping hardware. As a result, all currently shipping applications will continue to work on NVIDIA's existing and future hardware.
AMD - From: http://developer.amd.com/community/blog/2011/08/10/amd-releases-catalyst-beta-drivers-for-opengl-4-2/
At the same time, these innovations maintain full backwards compatibility, which means that developers can start using these new features whenever they choose, while still getting the most out of existing AMD Graphics across multiple operating systems and platforms.
grumbel also said:
OpenGL3 is only supported by Geforce8xxx cards and up
He thus said the upgrade may break the game on many cards (probably somewhat older ones, I assume, but still a possible problem).
When I brought up the points about Mac OpenGL support, OpenGL ES support (mobile device support), and support on some Linux distros, he then said:
I am still not really sure on the MacOSX situation, as far as I understand it you have to stick with OpenGL3.3Core when you want new features, but you can still use OpenGL2 if you want to. It's just that you can't mix them I think
Toward the end of the conversation, he conceded that Super Tux may eventually have to have two versions of the OpenGL code to work on everyone's machines.
He brought up an interesting point about SDL 2:
datahead8888: you can't really mix OpenGL and SDL2 at the moment. It's using GL internally, but it doesn't have any support for inserting your own GL code in between the SDL2 render stuff
SuperTux actually is using SDL 2 with OpenGL. How did they do it?
We have two separate renderers, one based on SDL_Renderer and one on raw OpenGL
Urrrgh, what’s that. @Luiji, please pop in and share your thoughts on this.
So SDL2 can’t be mixed with custom OpenGL code. This may be good or bad, depending on whether we actually want our own OpenGL code. In any case, we do not have the resources to provide two renderers as SuperTux does, therefore we have to make a decision on that.
Both NVidia and AMD, who manufacture most of the graphics cards out there, have stated they won’t drop OpenGL 2.x compatibility, if I understand correctly. It would be interesting to know whether Intel won’t drop it either; then we would have covered all three big manufacturers of graphics cards.
Assuming they also maintain 2.x compatibility for now, and further assuming we decide to use raw OpenGL rather than relying on an abstraction layer, I would suggest to #ifdef the OpenGL 2.x code and provide OpenGL 3.x alternatives for it that can be enabled with a compile-time option, say -DUSE_OPENGL3 or so. We can then later decide (perhaps when manufacturers finally decide to drop OpenGL 2.x) to make that compilation option the default, and even later remove the #ifdefs for the old OpenGL 2.x code.
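A minimal sketch of such a compile-time switch (placeholder bodies, assuming the proposed -DUSE_OPENGL3 define; the real code would contain the actual GL calls):

```cpp
#include <string>

// The GL 2.x path stays the default; building with -DUSE_OPENGL3
// selects the 3.x path instead. Only the returned label is real here.
std::string rendererBackend() {
#ifdef USE_OPENGL3
    return "OpenGL 3.x core profile"; // shader-based path
#else
    return "OpenGL 2.1 compatibility"; // current fixed-function path
#endif
}
```

With CMake this would be surfaced as an option() that appends the define to the compile flags.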
If we decide to use some abstraction layer, SDL2 for instance, we delegate the problem to the abstraction layer. It won’t go away, but then the devs of the abstraction layer have to solve it, and they are probably more knowledgable about that than we are.
Valete, Quintus
So SDL2 can’t be mixed with custom OpenGL code. This may be good or bad, depending on whether we actually want our own OpenGL code.
Without our own OpenGL code we may be pretty limited on task #94 - the lighting system, unless it turns out SDL or another API provides equivalent functionality with shaders on hardware.
The reason the original SMC used SDL was because this was a common way to handle the Windowing system and to connect it with OpenGL. I'm guessing the paradigms are changing, but I'm obviously not up to speed on the latest SDL version right now.
It would be interesting to know whether Intel won’t drop it either; then we would have covered all three big manufacturers of graphics cards.
My understanding was that Mac and mobile systems are still issues. We probably don't have any plans to move to mobile soon, but we don't want to handicap ourselves. From what I've heard, though, it's usually hard to control a 2D platformer on a phone unless you plug a controller into it.
I would suggest to #ifdef the OpenGL 2.x code and provide OpenGL 3.x alternatives for it that can be enabled with a compile-time option
At this point, it'd be worth considering writing two renderers (as classes) rather than having a lot of #ifdefs. We would also need to research how to test each separately, possibly on different cards.
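The two-renderers-as-classes idea could look roughly like this (class and method names are hypothetical): a common interface with one implementation per OpenGL generation, selected once at startup instead of scattering #ifdefs through the drawing code.

```cpp
#include <memory>
#include <string>

// Common interface both renderer generations implement.
class Renderer {
public:
    virtual ~Renderer() = default;
    virtual std::string name() const = 0;
    // virtual void drawSprite(...) = 0;  // etc.
};

class GL2Renderer : public Renderer {
public:
    std::string name() const override { return "GL2"; }
};

class GL3Renderer : public Renderer {
public:
    std::string name() const override { return "GL3"; }
};

// Chosen once at startup from detected capabilities
// (the detection itself is omitted here).
std::unique_ptr<Renderer> makeRenderer(bool gl3Available) {
    if (gl3Available)
        return std::unique_ptr<Renderer>(new GL3Renderer());
    return std::unique_ptr<Renderer>(new GL2Renderer());
}
```

Testing then reduces to running the same scene through each concrete class, possibly on different machines/cards.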
If we decide to use some abstraction layer, SDL2 for instance, we delegate the problem to the abstraction layer.
This is obviously more convenient, but there are some things that OpenGL is very good for that we might want to leverage (in the issue list). I'd be interested if @Luiji thinks Allegro/SFML might solve these big problems for us.
@Luiji - you said in #103 that Allegro will add bounds to an image if it is not a power of 2. From this it sounds like it probably deals with both old OpenGL (needs padding to get to powers of 2) and newer OpenGL (should not need padding). If this is correct, Allegro may be a possible solution to the main problems we listed above in which different versions of OpenGL may break on different machines.
Do you know if SFML can work with alternate versions of OpenGL?
After this is resolved, we could still consider using GLSL (OpenGL Shading Language) with Allegro/SFML for tasks such as #94 (the lighting system). In order to be 100% compatible with all cards, we may have to allow this feature to be turned on/off automatically based on card support.
I would like to post a question about this on gamedev.net and stackoverflow.com (and maybe the Allegro & SFML websites) but may not get time for at least a week. I'd also be curious to see if they agree with our conclusions on the physics system. stackoverflow.com did work for resolving our nasty formatting merge conflict problems.
Sorry for the late reply. Luckily, @Quintus messaged me and pointed out that this specific topic requested my input. So many messages that this one passed me by. :/
@datahead8888 Allegro only corrects for power-of-two sizes when it detects it's necessary. A quick search displayed evidence that SFML does the same. I know a lot more about Allegro because I've spent much more time exploring and experimenting with its code base, though.
Also, there's absolutely no pressing need to upgrade our OpenGL code. None of the hardware providers are going to kill support for it, because too many vendors still depend on it. In particular, it's far outside the interests of video game companies to upgrade code bases for older PC games, and the most likely situation is that any hardware designer who presents a gaming chip without these capabilities is going to fail due to lack of "legacy support".
The primary reason that we would want to avoid deprecated OpenGL calls is to support OpenGL ES, which doesn't support most of the older pipelines. This means that the main reason we'd want to consider an upgrade is to support phones, tablets, and low-end notebooks. Even then, many tablets and notebooks still support full OpenGL.
Upgrading will increase the number of platforms we can run on, but that's about all we have to think about on the subject.
As for FluXy's quote, "Don't use Allegro :P OpenGL and later OpenGL ES is the best way to go.", I think he failed to understand that it's not Allegro vs. OpenGL we're debating. All of the platforms we've discussed use OpenGL. It's a debate between SDL and Allegro, and another debate between manual OpenGL and automated OpenGL. We have to remember that OpenGL is a mechanism for rendering textured triangles via a graphics card, while everything else, from window management to input handling to image loading, comes from one of the higher-level libraries.
The two debates are independent as all the high-level libraries we've discussed give us both automated and manual options. It's my opinion that we should use either Allegro or SFML for the high-level library and automated OpenGL since the resulting rendering code tends to be cleaner. We can, as I've said before, use OpenGL directly for special code even if we use the higher-level tools for other purposes. E.g., we can do:
```cpp
al_draw_rectangle(0, 0, 32, 32, red, 1.0f);
glDoSomething(...);
al_another_thing(...);
```
Speaking as someone with virtually no technical knowledge or expertise in this area, I'd just like to say...
HTML5! Emscripten / ASM.js / WebGL / Goo Engine cross platform goodness (on the bleeding, untested edge)! All the cool kids are doing it!
I'll run out of the room now before the flames begin...
Emscripten
Hehe. If someone finds the time, he can run TSC and all its dependencies through Emscripten, then it will run in the browser :-)
Vale, Quintus
I tell you, being playable on the web could bring a big, big audience. Even I wouldn't assign it a massively high priority at the moment though.
Also, there's absolutely no pressing need to upgrade our OpenGL code. None of the hardware providers are going to kill support for it because too many vendors still depend on it.
One of my concerns with not upgrading OpenGL in some form is that I remember learning that you cannot mix legacy OpenGL with GLSL (the OpenGL Shading Language) in a pipeline. I don't have a hard source to reference on this and am going by memory, but it could block task #94 - the lighting system. It is also possible to compute lighting on the CPU in another thread or on the GPU (graphics card) using an API such as OpenCL, but we may run into more issues getting it to work per fragment (preliminary pixel).
I had understood @Quintus found that Linux systems using Intel chips may not support legacy OpenGL. Reading this again, it does not say Windows with Intel chips do not support legacy OpenGL. As for Macs it says the "core profile" is supported:
https://www.opengl.org/wiki/Legacy_OpenGL
I can confirm we use at least the glLoadIdentity() function (because I just stumbled over it), which has last been specified in OpenGL 2.1 (that is, SMC uses OpenGL 2.1). Current version of OpenGL spec is 4.5. The linked article on legacy OpenGL states this on support of old versions:
Both AMD and NVIDIA provide backwards-compatible implementations at least on Windows and Linux. Apple does only provide an implementation of the core profile and supports core OpenGL 3.2 on Mac OSX. Intel provides an implementation for Windows up to OpenGL 3.1 with Sandy Bridge CPUs and OpenGL 4.0 with Ivy Bridge CPUs. However, Intel's Linux open-source driver developers have recently stated that they will not provide backward-compatibility on Linux.
I derive from that:
- On Windows, 2.1 is still supported by backward-compatibility implementations from AMD and NVIDIA, but not by Intel, which only supports 3.1 in future drivers.
- On Mac OS X, 2.1 is unsupported. At least 3.2 must be used.
- On Linux, 2.1 support will soon be dropped, at least by Intel’s open-source driver.
Feel free to correct me if I misunderstood this. It would be very convenient for us not to have to upgrade OpenGL in a hurry. The other question is whether we should continue the freeze on OpenGL changes while we figure this out, or start developing features that require new OpenGL code.
I can't speak to whether Allegro or SFML is better. I've heard some people complain about Allegro, saying it grew out of its original bounds and that the API reflects it (or something to that effect). However, if Allegro is more popular, it may be better supported, and if @Luiji knows it well, it will be easier for us to implement and support.
I would assume both Allegro and SFML support rotation, translation, scaling, and some form of transparency/blending. Other features such as a stencil buffer could turn out to eventually be useful, though I'd assume we could use OpenGL for these if the 2D APIs don't support them. One thing @Bugsbane and I were discussing in #94 was using normal maps, 3D planes, or some other abstraction in order to change the way a custom lighting system works with images that have already been shaded under 3D lighting assumptions. If we can do these options strictly in OpenGL without giving 3D data to Allegro/SFML, there won't be a problem; otherwise it's something to consider.
I've been meaning to make forum posts on gamedev.net, stackoverflow, and the allegro/sfml sites. As we've seen above, though, I'm going to need to find a large enough block of time to adequately describe the questions. I can post links when these are finally written.
Emscripten
I had heard about tools like this. WebGL of course could offer an OpenGL solution. Would we have to compile each dependency from source to JavaScript? The other question is performance - JavaScript is interpreted, so I assume any performance issues we already have may very well be multiplied. Processors do keep getting faster over time, though.
Of course, then we'll get stuck testing two kinds of cross builds, but I guess we'll find some sort of middle ground. :)
JavaScript is interpreted
JavaScript is just-in-time compiled and optimized on all popular browser implementations.
The other question is performance - JavaScript is interpreted, so I assume any performance issues we already have may very well be multiplied.
From what I read about the Banana Bread demo (Unreal 3 compiled to JavaScript), it ran at about half the speed of the native code. While that doesn't sound great, it still looked just fine in practice, and I can't imagine TSC needing the same specs as Unreal.
Before the discussion grows out of bounds with regard to Emscripten, please note that
Our primary goal should be the desktop, and if it runs in a browser, I’d consider that as something “that is fine”. Accepting patches for that to work is OK, but we should not place our entire development under this aim. Not everyone uses Firefox, Chrome, or IE as the browser for various reasons, but playing TSC would still be nice.
Valete, Quintus
@Luiji @datahead8888 @brianvanderburg2 Currently, TSC uses SMC’s old SDL1 + OpenGL2 combination (because SDL1 was horribly unperformant on its own). I am under the impression that given the discussion in #10, everyone is fine if we switch to SFML. Note this does not imply SFGUI, which is a separate topic. CEGUI can still be used with SFML as well.
Is everyone fine with switching to SFML as the main rendering framework then?
Valete, Quintus
I'm not a TSC dev so in the end it's your decision anyway.
The time I used SFML (2.1) it worked out nicely, but from what I heard on reddit you might run into two problems:
Also, compile times might get worse. That's the price you pay for a modern C++ API.
OpenGL Core Context support, apparently it's still pending: LaurentGomila/SFML#654. EDIT: The 2.3 release is not too far in the future as far as I can see. The guys in #sfml talked about 2.3 when I was there and it appeared nearly ready to me.
SFML devs assume you create your own rendering method as their methods don't do batch drawing.
Are you referring to this? http://www.sfml-dev.org/tutorials/2.2/graphics-vertex-array.php To this? http://www.sfml-dev.org/tutorials/2.2/window-opengl.php
SFML works just fine without having to write a renderer. If you want, you can drop down to bare OpenGL for performance, but it is not required. However, I’m new to all this, so I’m going to wait for a comment on that topic by @Luiji, who said he has worked with it.
Vale, Quintus
Are you referring to this?
Yes.
Good to see those two issues won't be a problem.
I would like to pause a moment and discuss both of sauer2's new points a little more before finalizing this decision.
OpenGL Core Context support, apparently it's still pending
From what Quintus said, it may not be a problem, but I'd be curious if Luiji has any concerns. I'd like to read the SFML team's discussion some more when I get time.
SFML devs assume you create your own rendering method as their methods don't do batch drawing.
Many higher-level APIs akin to SFML allow you to override a draw() method (if object-oriented), in which you write the logic to draw everything to the screen. I would like to understand this point a little more before making a decision, if possible.
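For illustration, the override pattern mentioned above looks roughly like this. This is a library-free sketch: `Drawable`, `RenderTarget`, `Player`, and `Level` are hypothetical stand-ins, not the real SFML types (SFML's own version of this is `sf::Drawable::draw()`).

```cpp
#include <string>
#include <vector>

// Stand-in for the library's render target. A real target would rasterize
// the commands; here we just record them so the flow is visible.
struct RenderTarget {
    std::vector<std::string> commands;
    void submit(const std::string& cmd) { commands.push_back(cmd); }
};

// Base class in the style of sf::Drawable: the library calls draw(),
// and each game object decides what it emits.
class Drawable {
public:
    virtual ~Drawable() = default;
    virtual void draw(RenderTarget& target) const = 0;
};

class Player : public Drawable {
public:
    void draw(RenderTarget& target) const override {
        target.submit("sprite:player");
    }
};

class Level : public Drawable {
public:
    void draw(RenderTarget& target) const override {
        target.submit("tilemap:level");
        player.draw(target);  // composite objects forward to their children
    }
private:
    Player player;
};
```

The point of the pattern is that the main loop only ever calls `draw()` on top-level objects; each object is responsible for drawing itself and its children.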
Also, we have a second ticket that also talks about the OpenGL upgrade. Can we close it as a duplicate, or do 2 tickets still make sense?
I have experimented a bit with this due to your suggestion and found that it’s really easy to use. I have implemented an ugly and simple example of creating an object that consists of several sub-images that are handed to the graphics card at once as one single blob: a Scenery class that represents all the static scenery in a level; LevelScene uses it. It is possible to add as many elements as necessary to the Scenery instance; they will all be thrown onto the GPU in one step as one single large sf::VertexArray, which should be super-performant. Additionally, I still allow collision detection with this by maintaining the bounding boxes separately (note, however, that the collision detection itself is not really performant: all boxes are simply iterated and checked).
Even I, who have never touched a line of OpenGL code, can understand how this works <3.
Vale, Quintus
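The batching idea described above can be sketched without SFML at all. In this library-free version, `Vertex`, `Rect`, and `Scenery` are simplified stand-ins (the real code would use `sf::VertexArray` and hand the whole buffer to the GPU in one `draw()` call); the naive all-boxes collision scan is the same as described.

```cpp
#include <cstddef>
#include <vector>

struct Vertex { float x, y, u, v; };  // position + texture coordinate

struct Rect {
    float left, top, width, height;
    bool intersects(const Rect& o) const {
        return left < o.left + o.width  && o.left < left + width &&
               top  < o.top  + o.height && o.top  < top  + height;
    }
};

class Scenery {
public:
    // Append one textured quad (4 vertices) to the single shared buffer
    // and remember its bounding box separately for collision checks.
    void add_quad(const Rect& r) {
        vertices.push_back({r.left,           r.top,            0.f, 0.f});
        vertices.push_back({r.left + r.width, r.top,            1.f, 0.f});
        vertices.push_back({r.left + r.width, r.top + r.height, 1.f, 1.f});
        vertices.push_back({r.left,           r.top + r.height, 0.f, 1.f});
        boxes.push_back(r);
    }

    // Naive collision test: iterate and check every box, as in the example.
    bool collides(const Rect& r) const {
        for (const Rect& b : boxes)
            if (b.intersects(r)) return true;
        return false;
    }

    std::size_t vertex_count() const { return vertices.size(); }

private:
    std::vector<Vertex> vertices;  // one big batch, drawn in one call
    std::vector<Rect>   boxes;     // kept separately for collision detection
};
```

The performance win comes from the single buffer: however many tiles the level has, the GPU sees one upload and one draw call instead of one per sprite.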
Also, we have a second ticket that also talks about the OpenGL upgrade. Can we close it as a duplicate, or do 2 tickets still make sense?
Yes, this is probably duplicate then.
Vale, Quintus
I have implemented an ugly and simple example of creating an object that consists of several sub-images that are handed to the graphics card at once as one single blob
Yes, I realized afterwards I misread sauer2's point. He's not talking about implementing your own draw method.
The only thing I'm concerned about is this "OpenGL Core Context" thing, but I honestly have no idea what you guys are talking about in that regard. I doubt compile times will be affected in a way that's detectable to a human being. Rendering can definitely be done efficiently.
The main thing we should do to reduce compile times is reduce the code base. Switching from CEGUI to a less complex environment would help in that regard, so long as the target is actually less complex. That's for a different discussion, however.
The only thing I'm concerned about is this "OpenGL Core Context" thing, but I honestly have no idea what you guys are talking about in that regard. [...] Rendering can definitely be done efficiently.
Ok. You are the experienced game dev, so if you say it is not relevant then I will not doubt your judgement.
The main thing we should do to reduce compile times is reduce the code base
As in "remove all the backward compatibility" ;-). Honestly, compilation time is something I do not worry too much about. cmake tries its best to instruct make to only compile the files that are affected by a change, and I have to say this usually works nicely for me. I rarely have to recompile the entire thing.
I’ve never seen a suggestion to optimise compilation time before. One never has seen everything, I guess... So, here’s the obligatory XKCD (© Randall Munroe, CC-BY-NC 2.5):
Vale, Quintus
I’ve never seen a suggestion to optimise compilation time before. One never has seen everything I guess...
In the C++ world this has been a topic ever since templates started requiring many compiler passes.
Approaches range from better languages (basically every modern compiled language compiles faster, even those that use C as intermediate code, not least for lacking the header/implementation split), to replacements for ancient build systems like make (I'll check whether ninja works easily and compiles faster on the next complete build), and even better linkers, like Google's gold linker.
That's not to say I'm a professional C++ dev, I just watched some talks.
Unrelated: Want a pro-tip for guelkerdev? Make the text panels opaque; transparent backgrounds strain the eyes and may even impede visually impaired people.
@Luiji: Out of interest, are there plans to drop the boost libraries, now that you were planning to migrate the base to C++11?
That's not to say I'm a professional C++ dev, I just watched some talks.
Talks are great :-). Except all the talks I have viewed were Ruby talks... :elephant:
Unrelated: Want a pro-tip for guelkerdev? Make the text panels opaque, transparent backgrounds stress the eyes and may even impede visually impaired people.
Errm, sauer2, this is nice that you make recommendations on my (more or less) commercial site, but I think this is not the place to do so.
Out of interest, are there plans to drop the boost libraries, now that you were planning to migrate the base to C++11?
I was actively looking for a rendering library that does not depend on boost, so the short answer is: yes. The long answer is: Not right now, wiring out all the boost stuff will take time.
Vale, Quintus
I take it then that I can start with SFML porting now? :-)
...yes I know I am pressing with this... I just want to start :-)
Vale, Quintus
That's not to say I'm a professional C++ dev, I just watched some talks.
Talks are great :-). Except all the talks I have viewed were Ruby talks... :elephant:
I guessed you had a background in web dev, and that is how I stumbled across your page. Didn't mean to be intrusive, sorry.
Out of interest, are there plans to drop the boost libraries, now that you were planning to migrate the base to C++11?
I was actively looking for a rendering library that does not depend on boost, so the short answer is: yes. The long answer is: Not right now, wiring out all the boost stuff will take time.
Nice. With Boost and CEGUI gone in the long term, that's a real cut.
I take it then that I can start with SFML porting now? :-)
I like how SFML is C++ based unlike SDL. Does it have decent support for all Linux/Windows/Mac, and is it future-stable? SDL looks like it is here to stay, but I've only recently heard of SFML.
It isn't backed by a company, but...
SFML has been around for quite some time now, and Laurent Gomila has opened development up to a community-based team, so it's not a one-man show anymore.
If that's what you mean by future-stable.
It runs on Linux/Win/Mac; support for iOS and Android is in the making.
On 23.03.2015 at 23:46, Brian Allen Vanderburg II wrote:
I take it then that I can start with SFML porting now? :-)
I like how SFML is C++ based unlike SDL. Does it have decent support for all Linux/Windows/Mac, and is it future-stable? SDL looks like it is here to stay, but I've only recently heard of SFML.
This task has morphed into the SFML port's master task. I have changed the title to be more meaningful. #55 (the OpenGL upgrade) and #11 (the SDL upgrade) have both been superseded by this task.
Following @datahead8888’s more sensible approach to porting TSC to SFML, the main work on this task (and its subtasks) has now shifted to the feature-sfml-port2 branch. I have closed the tickets related to the old approach and left only those open that have a meaning in the new approach as well.
Valete, Quintus
This is a task to decide whether to use third-party APIs for physics and rendering, as well as the development philosophy for all libraries.
Thoughts on each subsystem
There are 3 basic philosophies on how to apply them:
1) 100% - use third-party APIs for everything
2) Complement the third-party APIs with some custom code where necessary or reasonable
3) Continue using custom systems for these components
If we take option #1 for one subsystem, I think that policy should be consistent across all subsystems. Regarding complementing physics systems, I do not mean we should run out and start reimplementing dozens of Bullet features. I mean that we might consider extending a physics API with an additional physics feature or implementation. In such a case, we would normally interface with the API (perhaps adding a custom module) unless we had a good reason not to.
There are things I like about option #3, but given the goals of SMC and the views of the developers, I am beginning to believe option #2 is worth considering. Please do read the article link, though.