> - The color values are stored in accordance with the game's color depth. 8-bit games have 1-byte palette indices, 16-bit games have classic 16-bit RGB (R5-G6-B5), and 32-bit games have true ARGB (A8-R8-G8-B8).

I would suggest making AGS4 games 32-bit color depth only. 16-bit is a dead format; there is no point in making a game with such a restriction, it makes absolutely no sense. 8-bit is only cool for palette effects, but virtually nobody bothers these days. There should be better solutions than making the whole game 8-bit.
> `Game.GetColorFromRGB` is superseded or accompanied with `Game.GetColorFromRGBA(r, g, b, a)`

It would be fine to have both; we could even move color-related functions into a `Color` class: `Color.FromRGB`, `Color.FromRGBA`, `Color.FromRGB16`, `Color.FromHSV`, etc. But I should probably explore that idea as a module first.
> - Must look out for any hardcoded color values set in the engine. Not sure what to suggest here to ease the search. These have to be replaced with something like `GetHardcodedRGB(index)` (or a better name).

Maybe a `FromLegacyColor()` or similar?
> (Need to find a good form for serialization, maybe something like comma-separated values, avoiding an overly verbose XML format.)

Either `"255,255,255,0"` or `"#FFFFFF00"` (with or without the `#`).
> 16-bit is a dead format; there is no point in making a game with such a restriction, it makes absolutely no sense. 8-bit is only cool for palette effects, but virtually nobody bothers these days. There should be better solutions than making the whole game 8-bit.

Both formats at least serve the purpose of saving disk space and runtime memory. For example, one person was recreating very old games in AGS where the player can select a palette to play in (this game was made in 2020). Without 8-bit support, the game would be many times larger. To do the same in 32-bit, AGS would have to support custom shaders first. Paletted gfx is also a way of restricting yourself to a limited range of colors.
At the same time, what is the real downside of keeping these formats?
EDIT: Overall, I suspect only a few people use these formats, like 8-bit, because AGS does not promote them. There are engines out there, like PICO-8, that specifically promote restrictions. I genuinely wonder if the number of people trying to make 8-bit gfx games would increase if AGS advertised this mode.
EDIT2: In any case, I would not want to do format removal as part of this task; I'd prefer to have it as a separate matter.
> Must look out for any hardcoded color values set in the engine. Not sure what to suggest here to ease the search. These have to be replaced with something like `GetHardcodedRGB(index)` (or a better name).

> Maybe a `FromLegacyColor()` or similar?

Well, it's not a legacy color; it's a hardcoded color still used for certain purposes which do not have any exposed settings, like drawing the built-in dialog window, the default dialog options color, etc. (I do not remember them all).
> `Game.GetColorFromRGB` is superseded or accompanied with `Game.GetColorFromRGBA(r, g, b, a)`

Actually, to save on API entries, `Game.GetColorFromRGB` may be expanded with an `a` argument, with a default value of 255.
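That could look something like this (a hypothetical declaration only, not the final API):

```cpp
// Same r, g, b parameters as today, plus an optional alpha that
// defaults to fully opaque (255).
int GetColorFromRGB(int red, int green, int blue, int alpha = 255);
```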
Oh, something I forgot to mention: 8-bit images, and thus 8-bit color values, are currently used for masks, so they cannot be thrown out.
I don't want to throw out 8-bit images, just 8-bit game projects, if it reduces the complexity of the code.
Mask colors are hardcoded too, but with extra hacks to prevent certain colors.
> Mask colors are hardcoded too, but with extra hacks to prevent certain colors.

What do you mean? Mask colors are just 0-255 indexes.
> Mask colors are hardcoded too, but with extra hacks to prevent certain colors.

> What do you mean? Mask colors are just 0-255 indexes.

Maybe I'm remembering the part used in the editor, but there was a piece of code with hardcoded values up to 31, where some whites are turned into red, and everything above 31 is all red. Now that you make me think of it, I never checked what happens in-game when displaying masks with the colors that would be replaced.
> Maybe I'm remembering the part used in the editor, but there was a piece of code with hardcoded values up to 31, where some whites are turned into red, and everything above 31 is all red.

No, that is not related to masks; I explained what it is in the ticket's description.
In any case, I would not want to do any format/feature removal as part of this task. I'd prefer that it be considered a separate matter.
Currently I don't think that the presence of alternate formats will make it difficult to implement 32-bit color value support.
> The color values are stored in accordance with the game's color depth. 8-bit games have 1-byte palette indices, 16-bit games have classic 16-bit RGB (R5-G6-B5), and 32-bit games have true ARGB (A8-R8-G8-B8).

On further thought, perhaps it would be better to store color properties simply as 32-bit ARGB or an 8-bit index (depending on the context). It will be much easier that way, both for users and for any kind of analysis done in the engine. The value may still be converted to 16-bit or another format when applied to a drawing operation (that is done using fast bitwise operations).
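For illustration, that kind of conversion could look like the following (a minimal sketch; the function name is made up and this is not an actual engine routine):

```cpp
// Pack a 32-bit A8R8G8B8 value down to 16-bit R5G6B5 using only bitwise
// operations; alpha is dropped and each channel keeps its top 5/6/5 bits.
inline unsigned short ArgbToRgb565(unsigned int argb)
{
    unsigned int r = (argb >> 16) & 0xFF;
    unsigned int g = (argb >> 8)  & 0xFF;
    unsigned int b =  argb        & 0xFF;
    return (unsigned short)(((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3));
}
```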
EDIT: amended the first post with the proposed order of changes.
Sounds good to me.
An interesting problem with the transparent color:
Right now, when the engine loads 32-bit sprites, it replaces all fully transparent pixels (with alpha 0) with the standard AGS `COLOR_TRANSPARENT` (0x00FF00FF). This is done for compatibility, and also to let users draw and check for transparent pixels with `DrawingSurface`.
If we want full ARGB support, we can no longer do that conversion, because even fully transparent pixel values may have a meaning.
But then users will no longer be able to check for transparency by comparing pixels with `COLOR_TRANSPARENT`. They will have to check the alpha channel (the upper 8 bits of the integer) instead. That is, in the generic case. Of course, if they draw the image themselves they might still use `COLOR_TRANSPARENT`.
They could keep comparing against `COLOR_TRANSPARENT` if they've drawn it with `COLOR_TRANSPARENT`. Obviously, imported images may have other colors hidden in the RGB fields, so we may add a note somewhere about it. I imagine `COLOR_TRANSPARENT` will have a different value depending on whether it's compiled in an 8-bit or a 32-bit game.
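To illustrate the generic check described above (a sketch assuming the A8R8G8B8 layout, not actual engine or script API code):

```cpp
// A pixel is fully transparent when its alpha byte (the upper 8 bits
// of the 32-bit value) is zero, regardless of the RGB part.
bool IsFullyTransparent(unsigned int argb)
{
    return (argb >> 24) == 0;
}
```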
Opened a PR here: #2501. It mostly seems to work, but still has a few minor issues.
EDIT: ready for review now. The two things that I did not do are:
I posted this in the comments to #2501, but will repeat it here:
I've been wondering, would it make sense to handle the existing duality of Color / ColorNumber properties by hiding one of the pair depending on the current game's color depth? That is, display ColorNumber if the game is 8-bit and Color (RGB) if the game is 32-bit.
But then, it would also be nice to have the color displayed on the ColorNumber field. Maybe even add a [...] button to the ColorNumber property which opens a palette for selecting a color.
Alternatively, how feasible would it be to merge the Color and ColorNumber properties together? Only ColorNumber is serialized, and the Color (RGB) field is there only for editing at design time. This makes it possible to remove one and rename ColorNumber's "display name" to just Color. But the biggest question is whether it would be possible to change the look and functionality of the field depending on game settings. Supposedly the field may have a custom editor attribute that draws the field and provides a [...] button depending on some condition.
Resolved most subtasks from the list in #2501; the rest should be moved to separate tasks.
Problem
Historically, AGS stores a script color value in 16-bit format (IIRC it is R5-G6-B5) in both 16-bit and 32-bit games. This causes obvious problems in 32-bit games: a 16-bit value can neither address the full 24-bit RGB range nor carry an alpha component.
Besides that, colors with values 0-31 (this corresponds to the lower blue hues in ARGB format) have a special meaning: they are forced to refer to palette indices 0-31, even in 32-bit games. The reason this is done is, AFAIK, that certain hardcoded graphics in the engine are drawn using these palette slots.
EDIT: upon a quick check of the code, these colors are not taken from the real game palette, but from a special 32-slot hardcoded palette. So any changes to the 256-color game palette do not affect them. This also means that these 0-31 color indices are not "dynamic"; they always resolve to the same RGB values.
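As a sketch of the behavior described above (illustrative logic with made-up names, not the engine's actual code):

```cpp
// Hypothetical helpers, named for illustration only
extern const unsigned int hardcoded_palette[32]; // the fixed 32-slot table
unsigned int DecodePackedColor(int color);       // normal color decoding

// Values 0-31 bypass color decoding entirely and always resolve through
// the fixed table, so the game's own 256-color palette never affects them.
unsigned int ResolveScriptColor(int color)
{
    if (color >= 0 && color < 32)
        return hardcoded_palette[color];
    return DecodePackedColor(color);
}
```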
Proposed changes
In brief, the goal is to make the script color value a true 32-bit ARGB in 32-bit games (while 8-bit games keep their palette indices).
Further notes:
- `Game.GetColorFromRGB` is superseded or accompanied with `Game.GetColorFromRGBA(r, g, b, a)`.
- The `int color` value in the script API in 32-bit games should reliably be A8R8G8B8, meaning that users may also just make colors themselves using bitwise operations (see the sketch after this list). This must be documented in the manual (same for 16-bit games, if it's not already).
- Must look out for any hardcoded color values set in the engine; these have to be replaced with something like `GetHardcodedRGB(index)` (or a better name). EDIT: a good start is to search for `GetCompatibleColor()` calls which have hardcoded numbers as arguments.
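For example, composing a color by hand could look like this (a sketch assuming the A8R8G8B8 layout described above):

```cpp
// Pack four 0-255 channel values into a single A8R8G8B8 value,
// e.g. MakeARGB(255, 255, 160, 0) for an opaque orange.
unsigned int MakeARGB(unsigned int a, unsigned int r,
                      unsigned int g, unsigned int b)
{
    return (a << 24) | (r << 16) | (g << 8) | b;
}
```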
Upgrading game projects
Unfortunately, the "color" properties are stored as raw values in both the classes and the game project files (and not as RGB in high/true-color games). I think that, for better future compatibility and readability, it may be better to store RGB values. (Need to find a good form for serialization, maybe something like comma-separated values, avoiding an overly verbose XML format.)
When importing older projects, the color values will be automatically converted to RGB, except for 8-bit games.
Proposed order of changes
Editor:
- Find color properties stored as `short` and adjust them to `int`, changing format as necessary.

Engine:
Script API:
- Expand `Game.GetColorFromRGB(r,g,b)` with an optional alpha value, which is 255 by default.
- Declare the script palette as `ColorType palette[PALETTE_SIZE];`, where `ColorType` is a struct with RGB fields.
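A minimal sketch of what that declaration might imply (an assumed layout, not the actual definition):

```cpp
// Illustrative only: one palette entry with 0-255 RGB channels.
struct ColorType
{
    unsigned char r, g, b;
};

#define PALETTE_SIZE 256
ColorType palette[PALETTE_SIZE];
```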