Hello,
It is quite a hard thing to do, for multiple reasons:
-First, we use SDL for Grafx2. SDL doesn't have any support for multiple windows, and doesn't fit well with any GUI toolkit. It is very difficult to have both wxWidgets and SDL in the same window. It is possible, but you have to redirect the keyboard and mouse (and possibly other) events from wx to SDL, which is useless bloat in the code.
-Second, the code is not designed for that at all: the functions always check where the menu is before drawing anything.
The first point should be in a better state with SDL 1.3, which is not available yet.
However, I don't think running in multiple windows is a very interesting feature; I'd rather launch multiple independent instances of the program. You don't want to share the palette, brush, or anything else between pictures (we don't even have a copy & paste function!), and we don't want the interface to look like GIMP, with a lot of small floating windows all around the screen.
If you want a fresher look, you can use the skin file done by Ilkse, which will eventually be the default one for the 2.1 release. It looks much nicer than the current one, even if it's still limited to 4 colours.
Original comment by pulkoma...@gmail.com
on 3 Mar 2009 at 5:52
Well, the point would be to remove the dependence on SDL from the main code, so it can use whatever drawing backend it needs, and whatever UI system you use would feed it input.
Ideally, the code would just maintain a bitmap buffer for the screen, and accept mouse input through some function calls. It would also have function calls for changing the tools and settings and so forth. Then the UI layer would implement all the widgets and dialogs itself, while the drawing engine only cares about the bitmap buffer.
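To make the idea concrete, a rough sketch of the kind of boundary I mean might look like this; every name here is invented for illustration, nothing of it exists in Grafx2 today:

    /* Hypothetical engine interface -- only a sketch of the split,
       none of these functions exist in the current code. */
    #include <stdint.h>

    typedef struct {
        int      width, height;
        uint8_t *pixels;           /* 8-bit indexed picture buffer     */
        uint8_t  palette[256][3];  /* RGB palette, owned by the engine */
    } EngineFramebuffer;

    /* The UI layer drives the engine; the engine never touches the OS. */
    void engine_init(int width, int height);
    const EngineFramebuffer *engine_get_framebuffer(void);
    void engine_mouse_event(int x, int y, int buttons);
    void engine_key_event(int keycode, int ascii);
    void engine_select_tool(int tool_id);
    void engine_tick(void);  /* called on a timer so the airbrush etc. keep running */

The UI would blit the framebuffer with whatever toolkit it likes and call engine_tick() from its own event loop or timer.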
Original comment by paracel...@gmail.com
on 3 Mar 2009 at 6:09
The bitmap is stored in a back buffer, actually, so that's not really the main problem.
However, mouse input is done through the get_input function (see file input.c). There is a lot of mess involved here, handling keyboard, joystick and mouse to move the cursor, clipping it to screen coordinates, and then handling special cases (are we in the magnified area? in the standard one? in the menu?).
The main problem is that most functions clip their effect to the visible screen area. You can easily launch things, though; that's what Enclencher_bouton (= "Activate button") does. The keyboard shortcuts interface is the easy way to send commands to the program.
Actually, you could create an independent interface in another window that would just send SDL_Events to the Grafx2 window and get what you want with no change to the code. Or you could change the get_input function to get events from something else (using a pipe to another program/thread, or something similar, would be an example).
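For example, injecting a synthetic mouse move into the queue that get_input reads is only a call to SDL_PushEvent; this uses the plain SDL 1.2 API, and the coordinates are of course just example values:

    #include <string.h>
    #include <SDL.h>

    /* Push a fake mouse motion event into the SDL queue.
       get_input() will pick it up like a real one. */
    void send_fake_mouse_move(Uint16 x, Uint16 y)
    {
        SDL_Event ev;
        memset(&ev, 0, sizeof(ev));
        ev.type = SDL_MOUSEMOTION;
        ev.motion.x = x;
        ev.motion.y = y;
        ev.motion.state = 0;   /* no button held */
        SDL_PushEvent(&ev);
    }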
The problem is elsewhere: all the graphics are assumed to be 8-bit indexed colors. That's not the case in a wx window, for example, so you have to convert the picture to some suitable format (24-bit true color) as you send it to the UI. This is heavy work and will probably create problems when doing intensive things such as "Adjust picture": we have to take the shortest path to the video RAM there. Most functions in this part of the code are highly optimized (as in "assume many things about the source and destination surface"). Take a look at pxsimple.c, pxdouble.c, pxwide.c and pxtall.c; these are the "pixel handlers" for actual drawing to the screen. sdlscreen.c groups most of the SDL part, and there is still some code in divers.c (= various.c) that may rely on some assumptions about the screen.
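To show what kind of assumptions I mean, a simplified pixel handler in the spirit of pxdouble (this is not the actual Grafx2 code) could look like this; it writes one picture pixel as a 2x2 block and assumes the screen surface is 8 bits per pixel, addressed as screen + y * pitch + x:

    #include <stdint.h>

    static uint8_t *screen;  /* start of the screen surface (assumed 8 bpp) */
    static int      pitch;   /* bytes per screen line                       */

    /* Plot one picture pixel as a 2x2 block of the same colour. */
    static void pixel_double(int x, int y, uint8_t color)
    {
        uint8_t *p = screen + (2 * y) * pitch + 2 * x;
        p[0]         = color;
        p[1]         = color;
        p[pitch]     = color;
        p[pitch + 1] = color;
    }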
Actual drawing in draw.c may also rely on some assumptions and read from the screen for some effects, but I'm not sure of that; it may also use the backup pages.
We have to keep latency as low as possible, so the drawing should always take place direct-to-screen in the previews. That's why almost all tools use thin XOR lines for them. The real drawing is then done with all effects and brushes, usually in the offscreen picture buffer. But I think some operations will display it in realtime, too.
Conclusion: no real heavy problems for the input handling, but a need for extreme care in the display system. No latency allowed :)
Original comment by pulkoma...@gmail.com
on 3 Mar 2009 at 6:26
SDL is already converting 8 bit to 24 bit every time it updates the screen anyway. You can't draw directly to video RAM when running in a window on a truecolour desktop. A lot of those optimizations are probably not relevant on modern hardware. They may even be slowing the code down due to assumptions that are no longer true (for instance, it might be faster to work through OpenGL textures).
For the input part, that would probably be a lot less messy if it was separated as I suggest - the UI layer could worry about whether input comes from a mouse, joystick, tablet, or whatever, and just send processed input to the drawing engine, which can then handle it as it wants.
Original comment by paracel...@gmail.com
on 3 Mar 2009 at 6:40
Yes, OpenGL would be very nice for platforms that have support for it (remember we're running on GP2X, AmigaOS, AROS and Haiku, which have no support for 3D hardware acceleration with OpenGL). It could be used to scale up the interface, as some people like to have everything zoomed x2.
I know SDL is handling the conversion, but when using wxWidgets we get a very complex process (the indexed-to-24-bit step is sketched below):
-SDL draws to an SDL_Surface in 8-bit indexed mode
-We convert it to a 24-bit surface in wxBitmap format
-Then we send it to wxWidgets, which converts it again to whatever the screen bitdepth is and finally displays it.
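The conversion step in the middle is essentially a palette lookup expanding every indexed pixel to three RGB bytes, roughly like this (just an illustration, not actual code from any project):

    #include <stdint.h>

    /* Expand an 8-bit indexed image to packed 24-bit RGB.
       'palette' holds 256 entries of 3 bytes (R, G, B); 'dst' must
       be width * height * 3 bytes. */
    void indexed_to_rgb24(const uint8_t *src, uint8_t *dst,
                          const uint8_t palette[256][3],
                          int width, int height)
    {
        int i;
        for (i = 0; i < width * height; i++) {
            const uint8_t *c = palette[src[i]];
            dst[3 * i + 0] = c[0];
            dst[3 * i + 1] = c[1];
            dst[3 * i + 2] = c[2];
        }
    }

That whole pass happens on every screen update, on top of whatever the toolkit does afterwards.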
This is what I have done in another project called cpcsdk (for Caprice Reloaded, an Amstrad CPC emulator). The approach is pretty much the same with Qt. The idea is that you can't get direct access to the hardware, meaning you have to use the API of a toolkit, handle all the palette mess by yourself, and so on.
The optimizations are not slowing things down much, but for example we are assuming:
-The screen surface is 8 bits/pixel
-The screen surface is linear, with no gap between the lines (that already doesn't hold on Amiga when using a windowed mode, as you get direct access to the VRAM there)
This allows for some optimizations of the inner loops of the "copy" functions which upload the picture from the backbuffer to the screen surface, as well as the one calculating the zoomed view (a simplified version follows below). With OpenGL, most of this could be done by the hardware, which would be nice. But we still need the software way as a fallback.
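As an illustration only (the real loops are more optimized and not written like this), the zoomed-view upload boils down to something like this under the 8 bpp / linear assumptions above:

    #include <stdint.h>
    #include <string.h>

    /* Copy a rectangle from the 8-bit back buffer to an 8-bit screen
       surface, magnifying it by an integer factor. */
    void copy_zoomed(const uint8_t *back, int back_pitch,
                     uint8_t *screen, int screen_pitch,
                     int width, int height, int zoom)
    {
        int x, y, z;
        for (y = 0; y < height; y++) {
            const uint8_t *src = back + y * back_pitch;
            uint8_t *dst = screen + (y * zoom) * screen_pitch;

            for (x = 0; x < width; x++)      /* repeat each pixel horizontally */
                memset(dst + x * zoom, src[x], zoom);

            for (z = 1; z < zoom; z++)       /* repeat the whole line vertically */
                memcpy(dst + z * screen_pitch, dst, (size_t)width * zoom);
        }
    }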
As for the input, that's exactly what the get_input function does. It gets the events from SDL and dispatches them appropriately, converting some keyboard shortcuts to the related mouse moves, for example.
The real input handling for keyboard shortcuts is done with only two variables, Touche (raw key code) and Touche_ASCII (ASCII code), mostly from Gestion_principale ("main management") for the main menu shortcuts, and from the event loop of each window for its own shortcuts.
So with only the global variables Mouse_K (mouse buttons state), Mouse_X, Mouse_Y, Touche and Touche_ASCII, you can take control of the whole program. These variables should only be modified in the get_input function. Rewrite it or append to it and you'll be able to do everything. If a button does not have a shortcut, we should add one.
As the keys are pretty standard, one could hide the fact that they are raw keycodes by renaming Touche to Event_Request and defining CLOSE_WINDOW=SDLK_ESCAPE, and so on.
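Something along these lines; Event_Request and the command names are hypothetical, only the SDLK_* constants are real SDL:

    #include <SDL.h>

    /* Hypothetical symbolic names over the raw keycodes. */
    #define CLOSE_WINDOW  SDLK_ESCAPE
    #define CONFIRM       SDLK_RETURN

    int Event_Request;   /* would replace the raw Touche variable */

    /* A front-end could then ask for a command without knowing that,
       internally, it is still a key code: */
    void request_close_window(void)
    {
        Event_Request = CLOSE_WINDOW;
    }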
If you remove the SDL windows from the program, then only the loop in Gestion_principale remains. This loop has to run continuously, even if there are no events, so tools like the spray don't freeze when you do nothing. So you could directly mess with the operation stack (in operatio.c) to select the drawing tool. I think you will just have to set a variable to the correct value (something like Current_operation = Full_circle) and the operation will be launched on mouse click.
A little problem is that you must not change the operation while Operation_Taille_Pile (operation stack size) is not 0 (that is, while the operation is not in its "idle" state). Doing that could leave weird things on the screen (I'd say nothing bad could happen to the backbuffer, as all that is mainly related to the display of the software cursor / preview XOR lines), but maybe some operations can do bad things, particularly when using drawing effects.
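So an external controller should guard the switch, something like this (the two globals are the ones mentioned above; their exact types and the helper itself are only illustrative):

    extern int Operation_Taille_Pile;   /* operation stack size, 0 = idle */
    extern int Current_operation;       /* currently selected tool        */

    /* Change tool only when no operation is in progress.
       Returns 1 on success, 0 if the engine is busy. */
    int try_select_operation(int new_operation)
    {
        if (Operation_Taille_Pile != 0)
            return 0;
        Current_operation = new_operation;
        return 1;
    }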
So, everything can be done with a more or less big set of input variables, which are all global, and so easily accessible from an independent thread if you need that (which seems likely for an independent core/UI architecture). This set of variables can be more or less wide, depending on how many things you let run inside the box.
I'd say Mouse_K, Mouse_X, Mouse_Y, Operation_Taille_Pile (and maybe some other info about the operation stack), Principal_Palette (main palette), and maybe some things related to the drawing mode and current page (main or spare) should be enough.
Original comment by pulkoma...@gmail.com
on 3 Mar 2009 at 7:16
Well, my point wasn't to tie it to any specific other UI library either. I was thinking that you could have multiple UI implementations for different platforms. I want to make an OS X GUI for the Mac, somebody else might want to use MUI on AmigaOS, or GTK on Linux, and so on. That is why you'd want to encapsulate the parts that need to be the same across all versions, and separate them from the UI stuff as far as possible.
This would basically require that the code be restructured so that the main engine does not drive the program, but the UI layer does. For instance, the UI layer would tell the drawing engine about mouse events, and would use some kind of timer to send it ticks to keep things like the airbrush running.
Original comment by paracel...@gmail.com
on 3 Mar 2009 at 7:34
The main engine is not doing much anyway; actually, once the operation is set up, it just calls the right function according to a table of function pointers: Operations[mouse buttons state][operation stack size]();
If all the UI/windows mess is removed, I think that's about the only thing left. So an external UI can plug into the core engine there quite easily and without much refactoring. Just set Mouse_{K,X,Y}, select the operation, and you're done. Call the appropriate function using this table, ideally once per VBL (that is unfortunately not possible with SDL, so we use an arbitrary delay), and display the result.
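So the loop an external UI would drive reduces to something like this (a stripped-down sketch; the dimensions and names of the table are not the real ones):

    typedef void (*OperationHandler)(void);

    #define BUTTON_STATES  4   /* illustrative sizes */
    #define STACK_SIZES    8

    OperationHandler Operations[BUTTON_STATES][STACK_SIZES];

    extern int Mouse_K;                 /* mouse button state   */
    extern int Operation_Taille_Pile;   /* operation stack size */

    void run_one_engine_step(void)
    {
        OperationHandler handler = Operations[Mouse_K][Operation_Taille_Pile];
        if (handler)
            handler();   /* one step of the current operation */
        /* ...then the UI refreshes the display from the back buffer. */
    }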
The only other thing you have to take care of is the palette: you need it for displaying the picture, and some operations need it to get the RGB values of each color (24-bit image loading, the transparency and smoothing draw-effects, and I may have missed some). And, of course, the switch from main to spare page, which is only some pointer swapping.
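Conceptually the page switch is nothing more than this (names made up):

    typedef struct Page Page;   /* whatever holds a picture and its palette */

    Page *main_page;
    Page *spare_page;

    void switch_main_and_spare(void)
    {
        Page *tmp  = main_page;
        main_page  = spare_page;
        spare_page = tmp;
    }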
Original comment by pulkoma...@gmail.com
on 3 Mar 2009 at 7:44
Interesting. I might look into that, then. But as I said, I think I'll wait for
issue #132 to make some progress first.
Original comment by paracel...@gmail.com
on 3 Mar 2009 at 8:09
As I understand it, your goal is to remove the menus and custom windows, and replace them with ones designed in another toolkit. OK.
I hope you realize you have to rewrite the 35 settings windows. I can easily count 350 KB of code (29 windows in buttons.c + shade.c + palette.c).
Then you have to write your own tool menus and colorpicker.
With multiple "sub-windows" available at the same time, you'll have to invent an enable/disable mechanism, so only the relevant functions are available at a given time.
> Well, my point wasn't to tie it to any specific other UI library either.
Sorry if I'm blunt, but if you don't base your generic interface on the reality of how some toolkits perform (what is efficient for them), it will have catastrophic performance.
It would seem more sane to me to choose an exact list of target UI toolkits, and THEN design the single interface that can actually work with them all.
Original comment by yrizoud
on 3 Mar 2009 at 11:39
Sure. That's not really all that much work in a modern GUI toolkit.
And it's not like you'd have to design it to run on any absurd toolkit. It'd be easy enough to pick a simple baseline to target, with a few hooks for more performance where needed.
Original comment by paracel...@gmail.com
on 3 Mar 2009 at 11:47
As a start, with the currently supported platforms, we'd need SDL (GP2X), MUI (Amiga 68k, AmigaOS 4, MorphOS, AROS), wxWidgets (Windows, OS X, Linux), and the Be API (Haiku, BeOS).
wxWidgets and the Be API are C++ only, SDL can do both C++ and C, and MUI is C only. I would not want to work with such a mess in a single project.
So it would require extracting the core engine from Grafx2 and starting new independent projects from there. We chose SDL for its extreme portability, and we really want to keep that.
What I'd do is wait for SDL 1.3 and see if it brings anything new. I think they are planning on supporting multiple windows per application, so we could use that instead of our embedded windows (but not on the GP2X, for example).
Original comment by pulkoma...@gmail.com
on 4 Mar 2009 at 7:15
Obviously it would be impossible to do this right away for all platforms. The point was just to make it possible to do, for whatever platform somebody feels like working on. The main code should keep using SDL.
Original comment by paracel...@gmail.com
on 4 Mar 2009 at 11:33
Sorry, we won't do anything about it. Feel free to do it yourself, we'll accept patches :)
Original comment by pulkoma...@gmail.com
on 13 Apr 2009 at 9:58
Original issue reported on code.google.com by
paracel...@gmail.com
on 3 Mar 2009 at 5:40