Open twiddlingbits opened 2 months ago
Yeah, I figured a widget-like implementation would be the easiest to modify/tweak. Though, do we want to make all of the UI elements in the Canvas? I believe there are some JS/HTML games that place HTML elements on top of the canvas to make things simpler. The only problem with that is that much of the design would go into HTML and CSS fields rather than programmatically like a pure C/C++ program would do.
If we want maximum compatibility, copying an existing API would probably work best. The only widget-type system I'm familiar with is GTK, which uses a mix of widget objects and CSS. Since most of those APIs are built on top of the drawing APIs, the fact that we are using JS to draw everything shouldn't have much of an effect. However, if we picked an existing implementation, it would likely be more compatible with ported software.
As for a window library, I think the easiest way to set it up would involve allowing the user to create more "virtual" (non-displayed) canvases. Then, each window gets its own appropriately sized canvas to draw on that can then be copied onto the main one. I don't know if there are optimizations to reduce the amount of copying needed, but it would be the easiest way to separate "windows".
do we want to make all of the UI elements in the Canvas?
This is a good question. I mulled it over for a while (a while back). I think using 100% canvas is probably the way to go. But worth a discussion.
Then, each window gets its own appropriately sized canvas to draw on that can then be copied onto the main one.
This technique is common in window implementations (it's how the Amiga generally worked -- although it was an option chosen when the window was created). With MS Windows, for a long time this (a "backing bitmap") was not an option. You get a "paint" message with a region that needs to be redrawn (because it has just been exposed, for example when a window is moved by the user), and the app is responsible for redrawing it. In our case, the programmer would use the d2d APIs to draw to a window. Internally, the drawing would likely go to an off-screen canvas, and the on-screen portions would then be blitted to the on-screen canvas. This assumes we support overlapping or off-screen windows. I was also thinking about a different desktop UI that uses tabs, like a browser or VS Code. Each app would get its own tab. In this case, the app could render to the on-screen canvas if it was visible, or to an off-screen canvas if not. There are various possible optimizations. Anyway, TBD.
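The "blit only the on-screen portions" step above boils down to intersecting each window's rectangle with the screen rectangle. A minimal sketch of that clipping step (the types and names here are illustrative, not existing twr-wasm APIs):

```c
#include <stdbool.h>

struct rect { int x, y, w, h; };

// Intersect a window's rectangle with the screen; returns false when the
// window is entirely off screen, otherwise writes the visible
// sub-rectangle (in screen coordinates) to *out.
static bool visible_portion(struct rect win, struct rect screen, struct rect* out) {
    int left   = win.x > screen.x ? win.x : screen.x;
    int top    = win.y > screen.y ? win.y : screen.y;
    int right  = (win.x + win.w < screen.x + screen.w) ? win.x + win.w : screen.x + screen.w;
    int bottom = (win.y + win.h < screen.y + screen.h) ? win.y + win.h : screen.y + screen.h;
    if (right <= left || bottom <= top)
        return false;    // no overlap at all
    *out = (struct rect){ left, top, right - left, bottom - top };
    return true;
}
```

The blit would then copy `out.w` by `out.h` pixels from the window's off-screen canvas, starting at offset `(out.x - win.x, out.y - win.y)` within it.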
If we were to copy an existing API, the obvious candidates are Windows or Mac. Although the Amiga would be cool (at least for me). It is probably also fine to have our own, especially if we could make it cleaner and easier to use. Windows, for example, is pretty convoluted, IMO. I am not familiar with the Mac. Amiga is straightforward.
It would probably be more flexible if we created something ourselves. However, basing it on something like Amiga (or another relatively simple API) would probably make it easier. I'm currently trying to think of the easiest way for it to interface with a program.
Either way, I still think an off-screen canvas should be used for each application/window. That way programs that directly modify the canvas will still work properly with little modification.
As for a framework for interfacing, I'm thinking applications have some callback/event framework like this:
// Main Functions:
void init(twrCanvas); //initialize program with the provided canvas or reference pointer
void render(int epoch); //function called with RequestAnimationFrame, includes a time epoch
void tick(int epoch); //called on a set interval similar to render. Talked about more below
// Event Functions:
//maybe not necessary, but it could signify the window going from out of view to in view
// would be useful for any applications that don't redraw the entire frame every cycle
void redraw(int epoch);
void mouse_move(double x, double y);
void mouse_click(double x, double y, int mouse_button);
void key_down(enum KeyCodes key);
void key_up(enum KeyCodes key);
void focus(); //kinda equivalent to redraw in this case, just when window is "focused"
void unfocus(); //window/application goes out of focus
I'm not sure if tick and render should be separated, but I did it like this on the assumption that there are cases where you might want to tick but not render. For instance, if applications can run when "offscreen" or out of focus, tick could allow them to run without needing to render. However, the render could be called anyway on an off-screen canvas and just not copied to the main one for the same effect. This is all under the assumption that everything is either running on the same WASM module or is all called from a root module rather than having multiple, individually running modules.
As for init, I figured that would be the easiest way to pass in the assigned off-screen canvas.
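The tick/render split described above can be sketched as a driver loop: tick always fires so background apps keep running, while render fires only when the app's canvas would actually be shown. Everything here is an illustrative sketch, not an existing twr-wasm interface:

```c
struct app {
    int visible;   // is the window currently on screen / focused?
    int ticks;     // counters just to observe the behavior
    int renders;
};

static void tick(struct app* a, int epoch)   { (void)epoch; a->ticks++; }
static void render(struct app* a, int epoch) { (void)epoch; a->renders++; }

// One frame of the driver: tick unconditionally, render only when
// the app is visible (an invisible app could alternatively render to
// its off-screen canvas and simply skip the copy to the main one).
static void drive_frame(struct app* a, int epoch) {
    tick(a, epoch);
    if (a->visible)
        render(a, epoch);
}
```

With this shape, deciding whether hidden apps burn CPU on rendering becomes a one-line policy choice in the driver rather than something each app must handle.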
Now for the widget library, I would assume it would run as a "root" program for the user program. It would directly hook into the render/tick functions and event callbacks and then pass those down to the user program via its own API. Alternatively, the user could manually pass in events and manually call the render/tick functions to control everything themselves.
Either way, I still think an off-screen canvas should be used for each application/window
yes, i agree.
Something along the lines you are thinking makes sense. Here are some thoughts:
Should we make it C++ only? It would make things simpler, but, on the other hand, it will need to be usable from C software (like Perfect Sound!).
C++ would be more convenient, but I believe it could also work with just structs and functions. If needed, a C++ wrapper could wrap the structs and functions into compatible classes.
AnimationFrame-type functionality is only needed for, well, animations, games, etc. For a simpler UI, even with scrolling, it's overkill.
I assume then that things like changing button colors, scrolling, etc would just be rendered immediately on event calls rather than as part of the animation loop? That way it only re-renders on event changes?
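The "only re-render on event changes" idea is the classic dirty flag: events mark the widget dirty, and the frame (or timer) loop draws only when the flag is set. A sketch under that assumption, with illustrative names:

```c
struct widget_state {
    int dirty;    // set by events, cleared after a redraw
    int redraws;  // counter to observe how often drawing happens
};

// Any state-changing event (hover, click, scroll, ...) just marks dirty.
static void on_event(struct widget_state* w) { w->dirty = 1; }

// Called every frame or on a timer; draws only when something changed.
static void maybe_redraw(struct widget_state* w) {
    if (!w->dirty)
        return;
    // ... issue the d2d draw calls here ...
    w->redraws++;
    w->dirty = 0;
}
```

This keeps the event handlers cheap (they never draw directly) while still avoiding a full redraw every cycle.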
My thoughts on how "windows", "menus", etc. work are tied to this topic. I wrote some thoughts here: https://github.com/twiddlingbits/twr-wasm/issues/26
I put comments on this in the mentioned issue.
I've done some research on the Amiga. I'm not sure how it's implemented in the backend, but here's a prototype I've worked out based on it. The main difference from how the Amiga did it is that my implementation is a bit more dynamic in how widgets are programmed, but less flexible in how they're rendered. For widget definitions, the Amiga seems to hardcode which types of widgets exist, whereas my implementation can take in any type of widget as long as it only needs events, rendering, and a free function. I feel like that's all widgets would really need from a top-level view, but I could be wrong; in that case, a more static approach might be better. Secondly, the Amiga seems to have a much more in-depth drawing system, with brushes, that I didn't implement. I'm not quite sure how in-depth the drawing setup for widgets should be, or whether it should just allow raw draw calls. For instance, the button widget could take an optional render function so the user can do whatever customization they want.
Setup:
enum twrWidgetEventType {
    TWR_WIDGET_EVENT_KEYBOARD,
    TWR_WIDGET_EVENT_MOUSE,
};

struct twrWidgetEventBase {
    enum twrWidgetEventType type;
};

enum twrWidgetKeyboardEventType {
    TWR_WIDGET_EVENT_KEY_UP,
    TWR_WIDGET_EVENT_KEY_DOWN
};

struct twrWidgetKeyboardEvent {
    struct twrWidgetEventBase base;
    enum twrWidgetKeyboardEventType type;
    int key;
};

enum twrWidgetMouseEventType {
    TWR_WIDGET_EVENT_MOUSE_MOVE,
    TWR_WIDGET_EVENT_MOUSE_DOWN,
    TWR_WIDGET_EVENT_MOUSE_UP,
    TWR_WIDGET_EVENT_MOUSE_CLICK,
    TWR_WIDGET_EVENT_MOUSE_DBLCLICK,
};

struct twrWidgetMouseEvent {
    struct twrWidgetEventBase base;
    enum twrWidgetMouseEventType type;
    int page_x;
    int page_y;
    int relative_x;
    int relative_y;
};

union twrEventRegistrationUnion {
    enum twrWidgetKeyboardEventType keyboard_type;
    enum twrWidgetMouseEventType mouse_type;
};

struct twrEventRegistration {
    enum twrWidgetEventType base_type;
    union twrEventRegistrationUnion secondary_type;
    //events are passed by pointer so handlers can downcast to the full
    //keyboard/mouse struct (a by-value base would slice off the rest)
    void (*callback)(struct twrWidgetEventBase*, void *);
};

struct twrWidgetBase {
    char* type;
    int x, y;
    int width, height;
    int visible;
    void (*draw)(struct d2d_draw_seq*, void *);
    void (*free)(void *);
    int num_events;
    struct twrEventRegistration* event_registrations;
};
Example button:
struct twrWidgetButton {
    struct twrWidgetBase base;
    char* text;
    char* text_font;
    char* text_color;
    char* default_color;
    char* hover_color;
    void (*onclick)(void *);
    void* onclick_data;
    int hovering;
    int initialized;
    int text_x, text_y;
};
void twr_widget_button_draw(struct d2d_draw_seq* ds, void * self) {
    struct twrWidgetButton* button = (struct twrWidgetButton*)self;
    struct twrWidgetBase* base = &button->base;

    d2d_save(ds);
    d2d_setfont(ds, button->text_font);
    if (!button->initialized) {
        button->initialized = 1;
        //find text_x and text_y such that the provided text is centered
    }
    d2d_setfillstyle(ds, button->hovering ? button->hover_color : button->default_color);
    //d2d_fillrect takes a width and height, not a second corner
    d2d_fillrect(ds, base->x, base->y, base->width, base->height);
    d2d_setfillstyle(ds, button->text_color);
    d2d_filltext(ds, button->text, button->text_x, button->text_y);
    d2d_restore(ds);
}
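The centering TODO in the draw function is plain arithmetic once the rendered text size is known. How to measure text is left open; `text_w` and `text_h` below are assumed inputs (e.g. from whatever text-metrics call the d2d API ends up exposing):

```c
// Center a text box of size (text_w, text_h) inside the button rectangle
// at (x, y, width, height). Note: with canvas-style filltext the y
// coordinate is the text baseline, so a real implementation would also
// add the font ascent; that adjustment is omitted in this sketch.
static void center_text(int x, int y, int width, int height,
                        int text_w, int text_h,
                        int* text_x, int* text_y) {
    *text_x = x + (width - text_w) / 2;
    *text_y = y + (height - text_h) / 2;
}
```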
void twr_widget_button_event(struct twrWidgetEventBase* event, void * self) {
    //the base pointer can be downcast because mouse events embed the
    //base struct as their first member
    struct twrWidgetMouseEvent* mouse_event = (struct twrWidgetMouseEvent*)event;
    struct twrWidgetButton* button = (struct twrWidgetButton*)self;
    struct twrWidgetBase* base = &button->base;

    assert(event->type == TWR_WIDGET_EVENT_MOUSE
        && (mouse_event->type == TWR_WIDGET_EVENT_MOUSE_CLICK
            || mouse_event->type == TWR_WIDGET_EVENT_MOUSE_MOVE));

    if (
        base->x <= mouse_event->relative_x && mouse_event->relative_x <= base->x + base->width
        && base->y <= mouse_event->relative_y && mouse_event->relative_y <= base->y + base->height
    ) {
        button->hovering = 1;
        if (mouse_event->type == TWR_WIDGET_EVENT_MOUSE_CLICK)
            button->onclick(button->onclick_data);
    } else {
        button->hovering = 0;
    }
}
//the registrations live in a static array so the pointer stored in the
//widget remains valid after new_button() returns (a compound literal
//inside the function would have automatic storage and dangle)
static struct twrEventRegistration button_event_registrations[2] = {
    {
        .base_type = TWR_WIDGET_EVENT_MOUSE,
        .secondary_type = { .mouse_type = TWR_WIDGET_EVENT_MOUSE_CLICK },
        .callback = twr_widget_button_event
    },
    {
        .base_type = TWR_WIDGET_EVENT_MOUSE,
        .secondary_type = { .mouse_type = TWR_WIDGET_EVENT_MOUSE_MOVE },
        .callback = twr_widget_button_event
    }
};

struct twrWidgetButton new_button(int x, int y, int width, int height, char* text, char* text_font, char* text_color, char* default_color, char* hover_color, void (*onclick)(void *), void* onclick_data) {
    return (struct twrWidgetButton) {
        .base = {
            .type = "twrWidgetButton",
            .x = x,
            .y = y,
            .width = width,
            .height = height,
            .visible = 1,
            .draw = twr_widget_button_draw,
            .free = NULL, //nothing to free
            .num_events = 2,
            .event_registrations = button_event_registrations
        },
        .text = text,
        .text_font = text_font,
        .text_color = text_color,
        .default_color = default_color,
        .hover_color = hover_color,
        .onclick = onclick,
        .onclick_data = onclick_data,
        .hovering = 0,
        .initialized = 0,
    };
}
After working a bit on how registration would work, I merged all the events into one enum rather than splitting mouse and keyboard events. It reduces the code a bit, but mainly it makes it easier to keep lists of widgets registered to each event type.
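Keeping "lists of widgets registered to each event type" with a merged enum could look like an array of registration lists indexed directly by the event type, so dispatch never searches. A sketch (all names here are illustrative, not part of any existing twr-wasm API):

```c
#include <stddef.h>

enum twr_event_type {           // the merged, single enum
    TWR_EVENT_MOUSE_MOVE,
    TWR_EVENT_MOUSE_CLICK,
    TWR_EVENT_KEY_DOWN,
    TWR_EVENT_KEY_UP,
    TWR_EVENT_TYPE_COUNT        // sizes the table below
};

#define MAX_LISTENERS 16

struct listener_list {
    void (*callbacks[MAX_LISTENERS])(void* widget);
    void* widgets[MAX_LISTENERS];
    int count;
};

// One list per event type: dispatch is a direct array index.
static struct listener_list g_listeners[TWR_EVENT_TYPE_COUNT];

static int register_listener(enum twr_event_type t,
                             void (*cb)(void*), void* widget) {
    struct listener_list* l = &g_listeners[t];
    if (l->count >= MAX_LISTENERS)
        return 0;
    l->callbacks[l->count] = cb;
    l->widgets[l->count] = widget;
    l->count++;
    return 1;
}

static void dispatch(enum twr_event_type t) {
    struct listener_list* l = &g_listeners[t];
    for (int i = 0; i < l->count; i++)
        l->callbacks[i](l->widgets[i]);
}

// demo callback used below
static int g_clicks;
static void count_click(void* widget) { (void)widget; g_clicks++; }
```

A real version would pass the event struct through `dispatch` to each callback; it is dropped here to keep the sketch short.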
Extending this post and the above posts, here's how I think widgets could work.
stateDiagram-v2
    HTMLCanvas --> Screen: Events
    Screen --> HTMLCanvas: Render
    Screen --> Window/Tab: Events
    Window/Tab --> Screen: Render
    Window/Tab --> Widgets: Events
    Widgets --> Window/Tab: Render
    WASM_Application --> Widgets: Widget Placement/Creation
    Widgets --> Button: Mouse Events
    Button --> WASM_Application: Click Event
    Widgets --> Text_Input: Keyboard Events
    Text_Input --> WASM_Application: Text Enter Event(s)
    Widgets --> Scroller: Mouse, Keyboard, and Mouse Wheel Events
    Scroller --> WASM_Application: Value Change Event
With this setup, I'm assuming that the Widgets are written in C while everything above is written in TS. I think the main problem with it is if a single application can have multiple Windows/Tabs and therefore multiple Widgets. I'm not sure how much it would slow things down if every event call needed to reference a lookup table to figure out what eventID corresponds to what Widget struct.
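On the lookup-table worry: if event IDs are handed out sequentially, the "table" can be a plain array, so resolving an event to its widget is one bounds check plus one index, which should be negligible next to the JS-to-WASM call itself. A sketch with assumed names:

```c
#include <stddef.h>

#define MAX_WIDGETS 64

// Sequentially assigned IDs index straight into this array.
static void* g_widget_by_id[MAX_WIDGETS];
static int g_next_id;

// Returns a new event ID for the widget, or -1 when the table is full.
static int assign_event_id(void* widget) {
    if (g_next_id >= MAX_WIDGETS)
        return -1;
    g_widget_by_id[g_next_id] = widget;
    return g_next_id++;
}

// O(1) resolution from the ID carried by an incoming event.
static void* widget_for_event(int event_id) {
    if (event_id < 0 || event_id >= g_next_id)
        return NULL;
    return g_widget_by_id[event_id];
}
```

A growable array (or a free list, if widgets can be destroyed) would replace the fixed cap in a real implementation; the constant-time lookup is the point.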
I noticed in pong that you have some user buttons. And I noticed that the way you implemented them is the start of a classic "gadget" or "widget" or other-name library. Buttons, checkboxes, radio buttons, scroll bars, etc. are the classic widgets.
For my goal of porting Perfect Sound, we may need such a library. And such a library would be useful for porting other C software. I have one I wrote a long time ago that I can dig up. It implements all the standard gadgets in C using line draws and rect fills and such. Or we could redo it in C++ (but we would want a C API as well for maximum compatibility).
A question is: should we create our own API, or just pick a classic implementation and clone the implementation? Mac, Windows, Amiga, etc.
Also, closely tied to this would be a window library - the ability to open a window, add menus, draw into it, etc. In twr-wasm world you can imagine that a Window is the stuff around a 2D drawing surface (this is also classic). And that the window is a thing the user can drag around.
The ability to implement a windowed UI on a web-browser-based twr-wasm OS desktop would be a differentiator, and would fit into where I want to take twr-wasm as an "OS". This is a longer conversation, but I think the next step after audio is probably to think about windows, menus, and widgets/gadgets.