twiddlingbits opened this issue 2 months ago
What events should be added to the library? I'm assuming mouse and keyboard events to start, but I'm not sure what else should be added.
As for the animation loop, I think it might be possible. I believe the only async function in the canvas library is load_image, which doesn't require the canvas itself. So unless a requestAnimationFrame event can be lost due to the blocking, it should get called eventually without any blocks. The main problem is that it would require all of the precomputed objects to be stored on the worker thread, and for load_image to somehow transfer the image back to the worker thread.
In addition, it seems like some events can be bound either globally or "locally". For instance, mouse events can be bound both to something like a canvas and to the page itself. The only problem with binding to an element is that the event still gives global coordinates. There are fields like `layerX` and `layerY`, but they don't give the actual position that was clicked relative to the canvas. My solution was to take the true, global position and then subtract the element's position. There might be better methods, but I can't think of any right now. So the questions are: should there be variants for both "local" and global events? And should the "local" variant automatically be made relative to the bound element, or should that be an option?
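A minimal sketch of that subtraction approach (the helper name is hypothetical; it assumes the element's box comes from `getBoundingClientRect()`, which is viewport-relative, so the scroll offset has to be added back before subtracting):

```typescript
// Hypothetical helper: convert a document-relative (pageX/pageY) position into
// canvas-local coordinates. rectLeft/rectTop would come from
// canvas.getBoundingClientRect(), which is viewport-relative, so the window
// scroll offset is added to it before subtracting.
function pageToCanvasCoords(
   pageX: number, pageY: number,
   rectLeft: number, rectTop: number, // canvas.getBoundingClientRect().left/.top
   scrollX: number, scrollY: number   // window.scrollX / window.scrollY
): [number, number] {
   return [pageX - (rectLeft + scrollX), pageY - (rectTop + scrollY)];
}
```

In a real handler this would be called as `pageToCanvasCoords(e.pageX, e.pageY, rect.left, rect.top, window.scrollX, window.scrollY)`; border and padding widths would also need subtracting if the canvas has them.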
Something similar can be done for keyboard events. However, it involves setting the tabindex of the canvas element so that you can "focus" it. This probably wouldn't be very useful for a pure canvas app like Pong, but it might be useful for something like the multi-IO example, where you have multiple canvases and terminals.
What might be helpful here is for us to spend a few minutes figuring out the high-level framework we want to implement. We might want to do this in person, or on Zoom with a virtual whiteboard.
For example, I don't think we necessarily want to just map what we do directly to HTML.
For example, here is what the Amiga did (I am not proposing this, just for illustration; I probably have some of this wrong, it's been a while), but in any case:
- A Screen set the bit depth, etc. A screen was always the full physical screen. Games often drew directly to a screen.
- Windows could exist on a screen. They could be nested, resized, etc.
- Windows could have borders, menus, and "gadgets" (like a close button, scroll bar, text, etc.).
- You could get events from a window ("Intuition events"). Events would be things like key and mouse. I imagine there were also events from the widgets (like scroll bar moved), but the details are escaping me.
In a hypothetical web-browser twr-wasm desktop, one option would be to use a tab UI (like a browser, or like VS Code), and think of each tab as a "screen" in the Amiga sense. I think I like this.
A screen could have windows, but another option would be to not allow overlapping windows. Phones don't do that. VS Code doesn't do that. They can be confusing.
I want an app like twr-wasm Pong to be a self-contained app (a single .wasm file) that will run in the twr-wasm OS env. For example, I want to be able to run it from the twr-wasm shell by typing "pong" or perhaps "http://johnathon.io/pong".
Another option I was thinking about would be to have the "desktop" provide a framework, sort of like npmjs, where you can search for and find apps, etc., then run them in another tab.
Anyway, I personally like a hierarchy approach, where events come from the bottom of the hierarchy:
- Screen -> Window -> Menu: get menu-selected events if you listen to the menu
- Screen -> Window -> TextWidget: get text events from the text widget
Of course, it can't be that simple, because you have to write the text widget, and it needs to get events from somewhere (like the window).
I also like the concept of incorporating URLs that can deep-link to a particular state/UI, like mentioned in pong.
I want to create something that allows apps, has similarities to desktop apps, is native to a networked world, and adopts common web metaphors, like links.
> I believe the only async function in the canvas library is load_image which doesn't require the canvas itself.

The issue is that all of the canvas draw functions happen in the JS main thread. This is because, in my testing, I discovered that they don't operate correctly if the JS event loop isn't running. And in `twrWasmModuleAsync`, the worker thread event loop is not running when the worker thread is blocked (which, if you follow what happens in a C call like `twr_sleep`, blocks on a call to an atomic wait).
> Anyway, I personally like a hierarchy approach, where events come from the bottom of the hierarchy:
>
> - Screen -> Window -> Menu: get menu-selected events if you listen to the menu
> - Screen -> Window -> TextWidget: get text events from the text widget
I'm assuming the hierarchy would be written either in C or C++ as part of the widget library? That way the TypeScript side can just send the HTML events to the widget library, which can parse them and shuttle them down. jsEventsLib could then be rather simple and barebones, and everything else could be implemented natively (in C/C++) to reduce calls between TS and C.
> In a hypothetical web-browser twr-wasm desktop, one option would be to use a tab UI (like a browser, or like VS Code), and think of each tab as a "screen" in the Amiga sense. I think I like this.
>
> A screen could have windows, but another option would be to not allow overlapping windows. Phones don't do that. VS Code doesn't do that. They can be confusing.
I agree with using tabs; however, I'm not sure about windows. I believe the easiest approach would be some form of tiling like VS Code does (and tiling window managers in general), so there's no overlap. However, how would resizing be done? Would each window be a separate program, or would it be a program per tab (screen) that could have multiple windows? In the case of multiple programs per screen, there would need to be some sort of rescale-and-redraw mechanism so they can respond to the user moving or resizing them. Otherwise, if it's a program per tab, the resizing and redrawing could be left to the program itself or the libraries it's using.
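For the no-overlap case, one simple model (a sketch for illustration, not a proposal for the actual API) is VS Code-style column tiling: on a resize, the screen recomputes each window's rectangle and hands it a redraw event.

```typescript
interface Rect { x: number; y: number; w: number; h: number; }

// Split the screen into n equal-width columns. On a resize, the screen would
// recompute this and send each window its new rectangle plus a redraw event.
function tileColumns(screenW: number, screenH: number, n: number): Rect[] {
   const w = Math.floor(screenW / n);
   return Array.from({ length: n }, (_, i) => ({ x: i * w, y: 0, w, h: screenH }));
}
```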
I'm not sure how much of a problem this is. It depends on
> Anyway, I personally like a hierarchy approach, where events come from the bottom of the hierarchy:
How much of this would be handled in TypeScript vs. C/C++? For instance, you mention the following:
> Anyway, I personally like a hierarchy approach, where events come from the bottom of the hierarchy:
>
> - Screen -> Window -> Menu: get menu-selected events if you listen to the menu
> - Screen -> Window -> TextWidget: get text events from the text widget
>
> Of course, it can't be that simple, because you have to write the text widget, and it needs to get events from somewhere (like the window).
I feel like this sort of setup would mostly be implemented on the C/C++ side. The screen/window gets events directly from JS/TS, parses them, and then passes them down as needed. For instance, screen gets raw mouse events from JS/TS, and passes them down to the relevant window(s), which could then be hooked into further systems like widgets for button clicks, text inputs, etc. Then listeners could be set at each level of the hierarchy depending on how much abstraction is needed.
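A toy sketch of that routing (all names here are hypothetical, and it is in TypeScript only for illustration; the real thing could live on the C/C++ side): the screen gets a raw mouse event, finds the window whose bounds contain it, and the window translates the coordinates before passing them to its widgets.

```typescript
// Hypothetical hierarchy: Screen -> Window -> Widget, events flowing down.
interface Widget { onMouse(localX: number, localY: number): void; }

class Win {
   private widgets: Widget[] = [];
   constructor(public x: number, public y: number, public w: number, public h: number) {}
   addWidget(wd: Widget) { this.widgets.push(wd); }
   contains(px: number, py: number): boolean {
      return px >= this.x && px < this.x + this.w && py >= this.y && py < this.y + this.h;
   }
   dispatchMouse(px: number, py: number) {
      // translate screen coordinates to window-local before passing down
      for (const wd of this.widgets) wd.onMouse(px - this.x, py - this.y);
   }
}

class Scr {
   private windows: Win[] = [];
   addWindow(w: Win) { this.windows.push(w); }
   dispatchMouse(px: number, py: number) {
      // route the raw event only to windows that contain the point
      for (const w of this.windows) if (w.contains(px, py)) w.dispatchMouse(px, py);
   }
}
```

Listeners could then hook in at whichever level (screen, window, widget) gives the right amount of abstraction.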
In that case, should this be split into a separate issue like "Window Manager" or "Desktop Manager" so that this issue can be focused on raw HTML events?
Looking at it more, I can see how it would be better to have the events be more abstract than just raw JS events. However, I'm not sure how exactly the events should be brought out.
For instance, should events just be part of some Screen class in JS? A screen could manage all events and decide when they get sent to individual programs, and hand out "virtual" canvases that programs draw on before the screen blits them to the main canvas. In this case, a program would need some way to register events with a screen and request a canvas to draw on. Then, if we had different tabs, the screen class could send events to the focused tab, provided the program has registered to receive them.
So, maybe something like this?
```ts
enum EventTypes {
   KEY_DOWN,
   KEY_UP,
}

class Screen {
   ctx: CanvasRenderingContext2D;
   element: HTMLCanvasElement;
   focused: number = 0;
   virtualConsoles: { [key: number]: twrConsoleCanvas } = {};
   registeredEvents: { [screen: number]: [IWasmModule|IWasmModuleAsync, { [event: number]: number }] } = {};

   internalSendEvent(eventType: EventTypes, ...values: number[]) {
      const focusedApp = this.registeredEvents[this.focused];
      if (focusedApp == undefined) return;
      const eventID = focusedApp[1][eventType];
      if (eventID == undefined) return;
      focusedApp[0].postEvent(eventID, ...values);
   }

   internalSendKeyEvent(eventType: EventTypes, e: KeyboardEvent) {
      this.internalSendEvent(eventType, keyEventToCodePoint(e));
   }

   constructor(element: HTMLCanvasElement) {
      this.element = element;
      this.element.addEventListener("keydown", (e) => this.internalSendKeyEvent(EventTypes.KEY_DOWN, e));
      this.element.addEventListener("keyup", (e) => this.internalSendKeyEvent(EventTypes.KEY_UP, e));
   }

   getCanvas(mod: IWasmModule|IWasmModuleAsync) {
      if (!(mod.id in this.virtualConsoles)) {
         this.virtualConsoles[mod.id] = /* create new twrConsoleCanvas */;
         this.registeredEvents[mod.id] = [mod, {}];
      }
      return this.virtualConsoles[mod.id].id;
   }

   registerEvent(mod: IWasmModule|IWasmModuleAsync, type: EventTypes, eventID: number) {
      this.registeredEvents[mod.id][1][type] = eventID;
   }
}
```
I created a diagram of what I was thinking above. Window/Tab isn't necessarily its own class, but it's there to represent the type of interface WASM will be working with. The main idea is that each HTMLCanvas gets its own "screen", from which an application can request a window/tab. This window/tab then has its own virtual canvas that the program can render to, which is eventually merged into the parent HTMLCanvas. However, it might also be beneficial to have programs automatically assigned tabs, so that if applications can be opened online, each gets its own tab that you can close even if it doesn't render anything.
```mermaid
stateDiagram-v2
    HtmlCanvas --> Screen: Events
    Screen --> HtmlCanvas: Render
    Screen --> Window/Tab: Events
    Window/Tab --> Screen: Screen Data
    WASM_Application --> Screen: Window/Tab Registration
    Window/Tab --> WASM_Application: Events, Virtual Canvas
    Window/Tab --> twrConsoleClass: Creation/Management
    WASM_Application --> twrConsoleClass: Rendering
    twrConsoleClass --> Window/Tab: Screen Data
    WASM_Application --> Window/Tab: Event Registration
```
Prototyped more of the event section and came up with this:
```ts
tabs: { [id: FullID]: [IWasmModule|IWasmModuleAsync, ...any[]] } = {};
registeredEvents: Map<number, Array<Map<number, number>>> = new Map();
selectedTab: number = -1;

constructor(canvas: HTMLCanvasElement) {
   // all library constructors should start with these two lines
   super();
   this.id = twrLibraryInstanceRegistry.register(this);

   this.canvas = canvas;

   const register_similar_events = (rangeStart: number, rangeEnd: number, handler: (e: any) => number[] | void) => {
      for (let i = rangeStart; i <= rangeEnd; i++) {
         canvas.addEventListener(EVENTS[i], (e) => {
            const res = handler(e);
            if (res != undefined)
               this.internalSendEvent(i, ...res);
         });
      }
   }

   register_similar_events(EventTypes.KEY_DOWN, EventTypes.KEY_UP, (e: KeyboardEvent) => {
      const r = keyEventToCodePoint(e); // twr-wasm utility function
      if (r) {
         // postEvent can only post numbers -- no translation of arguments is performed prior to making the C event callback
         // See ex_append_two_strings below for an example using strings.
         return [r];
      }
   });

   register_similar_events(EventTypes.MOUSE_DOWN, EventTypes.MOUSE_MOVE, (e: MouseEvent) => {
      // pageX/pageY are already document-relative (they include the scroll offset)
      return [e.pageX, e.pageY];
   });

   register_similar_events(EventTypes.WHEEL, EventTypes.WHEEL, (e: WheelEvent) => {
      return [e.deltaX, e.deltaY, e.deltaZ, e.deltaMode];
   });
}

internalSendEvent(event_type: EventTypes, ...args: number[]) {
   const event_types = this.registeredEvents.get(this.selectedTab);
   if (event_types == undefined) return;
   const event_handlers = event_types[event_type];
   if (event_handlers == undefined) return;
   for (const [eventID] of event_handlers) {
      this.tabs[this.selectedTab][0].postEvent(eventID, ...args);
   }
}
```
I'm not sure that it can go much further than this point without the ability to create/delete consoles dynamically. You mentioned here that there's currently no way to create/delete libraries from C, but I was wondering if there's a way to do it from TS outside of initialization. I looked through the code a bit and couldn't find one, but I could be missing something.
> I'm assuming the hierarchy would be written either in C or C++ as part of the widget library?
I think you could do it either way (C/C++ or JavaScript). One advantage to writing more of the code in TypeScript is that the language is easier to write and debug. Another advantage applies if there is common state that needs to be used by two or more `.wasm` modules. This is the reason I moved the consoles to TypeScript: so that multiple `.wasm` modules can write to the same console. Imagine a twr-wasm module that is a shell. Imagine that `echo` is entered, and the shell launches a new `.wasm` module called `echo.wasm`. And `echo.wasm` wants to output text to the console that launched it.
There is also the question of allowing twr-wasm code to have separate processes. I am not 100% sure yet, but one way to implement threads/processes with twr-wasm would be to allow a `.wasm` module to launch a new `.wasm` file as a new process. If this is the case, would you ever want to allow a window menu to have event callbacks in separate `.wasm` processes? It might be an edge case not worth worrying about.
And more importantly, I am thinking that we should allow twr-wasm to be used 100% with TypeScript. I.e., a developer can use twr-wasm to write an app that uses consoles, windows, etc. And then when I turn it into an OS, the OS apps can be written in TS or C. This would make it much more popular. So that "echo" cmd I just mentioned could be written in TypeScript.
> However, how would resizing be done?
You would have window resize events. For example, the following types of window events could be listened to: window closed, menu item selected, resize. And possibly more edge-case events like moved and exposed/paint (was behind something, but is now in front).
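A sketch of what that event surface might look like on the TypeScript side (the names are illustrative, not an existing twr-wasm API):

```typescript
// Hypothetical window event types and a minimal listener registry.
enum WindowEventType { CLOSED, MENU_ITEM_SELECTED, RESIZE, MOVED, EXPOSED }

class WindowEvents {
   private listeners = new Map<WindowEventType, ((...args: number[]) => void)[]>();
   on(type: WindowEventType, cb: (...args: number[]) => void) {
      const arr = this.listeners.get(type) ?? [];
      arr.push(cb);
      this.listeners.set(type, arr);
   }
   emit(type: WindowEventType, ...args: number[]) {
      for (const cb of this.listeners.get(type) ?? []) cb(...args);
   }
}
```

An app would then do something like `events.on(WindowEventType.RESIZE, (w, h) => redraw(w, h))`.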
> I'm not sure that it can go much further than this point without the ability to create/delete consoles dynamically. You mentioned https://github.com/twiddlingbits/twr-wasm/issues/38#issuecomment-2381651254 that there's currently no way to create/delete libraries from C, but I was wondering if there's a way to do it from TS outside of initialization. I looked through the code a bit and couldn't find one, but I could be missing something.
I think you can? See `twrmodutil.ts`:

```ts
io.stdio = new twrConsoleDiv(eiodiv, {foreColor: opts.forecolor, backColor: opts.backcolor, fontSize: opts.fontsize});
```
Regarding adding a C function to load a library (or a TypeScript method to load a library dynamically): I had a lot of trouble getting dynamic imports to work, which I do with `isCommonCode`. See the line `const libMod=await import(this.libSourcePath);` in `twrLibary.ts`. Also see the `isCommonCode` limitations documented in the library docs: https://twiddlingbits.dev/docsite/api/api-ts-library/. IIRC, the issue was around supplying the correct path. Maybe I solved this, I am not sure; I'd have to spend time digging back into it and paging those issues back into my memory. But I think this might complicate a C API that just took a name, aka `loadLibrary("audio")`.
I haven't fully read/understood your code examples yet. But I have the following proposal.
Note this below is different than what I suggested before -- a separate event library. In this new scenario, events are added to existing and new libraries. There still might be a global event library (or system event library), but it would be for events that have nothing to do with the onscreen display -- for example, file system, timer, hardware, etc. But these are probably in their own libraries (like I have in the timer library today).
I think the following classes make sense (some already exist, like canvas):
Then in the future, tabs/screens could be implemented. They could also contain a d2d canvas class (and almost nothing else). In this scenario, an app could draw directly to a screen's d2d canvas console, w/o using a window. Or alternately, you could not allow this and make a screen only a container for windows.
You then start to think about how to implement icons or other items on a screen besides an app window (imagine the scenario where the twr-wasm OS uses a screen like a PC desktop). In this case, you could allow "borderless" windows, which are pretty common in the desktop world. A window can have a title bar, a close widget, a border, etc., all as options; without them, a window is just a square that can be drawn into.
Then in the future, a tab could map directly to a screen. And an OS "gui shell" could implement icons for apps, window dragging, etc. Or a CLI shell could allow you to launch apps, and move windows around, etc.
A window could be dragged from one tab to another. So the `.wasm` module isn't tied to a particular screen or window; wasm modules just render to them, get events from them, and can manipulate them (resize, etc.).
An app could open a window on the "default" screen, or it could open its own screen and then open a window on that screen. Or it could enumerate screens. These are all more advanced cases -- one would start with one screen/one tab to simplify things.
I was imagining that for all this to work, we would probably add a new class `TwrWasmOS`.
`twrWasmOs.boot(autorunApp | CLIShell | GUIShell)`
autorunApp would be a `.wasm` module. Shells would be the default registered shell, which would be a `.wasm` module.
- It would have a way to register executables hosted in other domains (like twrwasmOsCommands.twiddlingbits.org).
- It would have an (optional) window that allowed the user to search for or run commands registered in our registry (akin to npmjs).
- Instead of a command/app being a .wasm file, it might be a .zip (or similar) file that contained assets other than .wasm.
See `twr-wasm\examples\tests-user` for the very early beginnings of what a CLI shell might look like: https://twiddlingbits.dev/examples/dist/tests-user/index.html
So as you think about the APIs, it is helpful to keep in mind that someday the GUIShell would be implemented using them -- although probably with the TypeScript versions of the APIs. One of the next APIs I was going to add was `loadWasmModule` or similar, so that the CLI shell I started could load and run a `.wasm` file.
stuff like that.
For the canvas, how would events be handled? For instance, would the canvas be used for both "physical" (as in on-screen) and "virtual" canvases or just physical? The benefit of having it work for virtual canvases would be that an application can expect a canvas and work whether it's being handled as a window (possibly using a virtual canvas) or just drawing directly to the main canvas. The main problem would be figuring out how to link it to its parent canvas in such a way that a window class could prevent sending events if something is off-screen.
For windows, would it be implemented directly around a canvas? So if you just ran a "window" it would run directly on the canvas itself, but it could also use virtual canvases with something like a screen to interact with them? In that case, would it be best to have events handled with screens like I mentioned above so that it works regardless of whether or not it's embedded in a screen?
I do have one concern about this implementation for windows, though. If a screen contains windows that contain apps, at what point does the indirection become too large? For instance, it sounds like you would need, at minimum, two copies: one from the wasm app to the window, and another from the window to the screen.
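To make the cost concrete, here is roughly what the window-to-screen copy amounts to (a sketch over plain arrays; a real implementation would use `drawImage` or `ImageData`). Dirty-rectangle tracking could skip this copy entirely for windows that haven't changed:

```typescript
// Copy a window's virtual buffer (row-major, one value per pixel) into the
// screen buffer at position (x, y). This is the second of the two copies
// mentioned above; the first is the app drawing into the virtual buffer.
function blit(
   dst: number[], dstW: number,
   src: number[], srcW: number, srcH: number,
   x: number, y: number
) {
   for (let r = 0; r < srcH; r++)
      for (let c = 0; c < srcW; c++)
         dst[(y + r) * dstW + (x + c)] = src[r * srcW + c];
}
```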
> I think you can? See twrmodutil.ts
> `io.stdio=new twrConsoleDiv(eiodiv, {foreColor: opts.forecolor, backColor: opts.backcolor, fontSize: opts.fontsize});`
The only problem is that `io` isn't exposed in `IWasmModule` or `IWasmModuleAsync`.
Though, it seems like there are only two steps to register a library:
So, I think a simple function could be added for registering new libraries through TS, something like:
```ts
registerLibrary(name: string, library: IConsole) {
   if (name in this.io) throw new Error(`registerLibrary: tried to register the ${name} library name twice!`);
   this.io[name] = library;
   this.ioNamesToID[name] = this.io[name].id;
}
```
So a screen class (or something similar) could generate something like "window-1", "window-2", etc. for names and register "virtual" canvases.
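A sketch of that naming scheme (the `io`/`ioNamesToID` shapes here are simplified stand-ins for the module's actual maps, and the class name is hypothetical):

```typescript
// Simplified stand-in for a console (the real one would be a twrConsoleCanvas).
interface IConsoleLike { id: number; }

class VirtualCanvasRegistry {
   private counter = 0;
   io: { [name: string]: IConsoleLike } = {};
   ioNamesToID: { [name: string]: number } = {};

   // Mint a unique name ("window-1", "window-2", ...) and register the
   // virtual canvas under it, mirroring the registerLibrary sketch above.
   registerVirtualCanvas(console: IConsoleLike): string {
      const name = `window-${++this.counter}`;
      if (name in this.io) throw new Error(`name ${name} already registered`);
      this.io[name] = console;
      this.ioNamesToID[name] = console.id;
      return name;
   }
}
```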
Also, I'm just going to put down the definitions (as I understand them) to make sure we're on the same page:

- Canvas: an HTMLCanvas that is either on the page (on-screen) or virtual (off-screen)
- Tab: a single application or a collection of windows (I think it's the second)
- Window: a wrapper around a canvas that adds events for resizing, closing, etc.
From what you talked about, this is what I'm thinking of for events:
```ts
enum CanvasEventType {
   KEY_UP,
   KEY_DOWN,
   MOUSE_MOVE,
   MOUSE_CLICK,
   // ...
}

interface CanvasEvents {
   handleKeyEvent: (event: CanvasEventType, key: number) => void;
   handleMouseEvent: (event: CanvasEventType, x: number, y: number) => void;
   // ...
}

function generateKeyHandler(handler: CanvasEvents, event: CanvasEventType) {
   return (e: KeyboardEvent) => {
      // convert the raw event to a code point (e.g. with keyEventToCodePoint)
      const keyCode = keyEventToCodePoint(e);
      if (keyCode) handler.handleKeyEvent(event, keyCode);
   }
}

function registerCanvasEvents(handler: CanvasEvents, canvas: HTMLCanvasElement) {
   // register events directly on the on-screen canvas
   canvas.addEventListener('keydown', generateKeyHandler(handler, CanvasEventType.KEY_DOWN));
   // ...
}

class twrConCanvas /* ... */ implements CanvasEvents {
   // keep old functions from twrConCanvas
   // add functions to register for events, which are called by the event handlers defined in CanvasEvents
   constructor(canvas: HTMLCanvasElement, selfRegisterEvents: boolean = true) {
      if (selfRegisterEvents) registerCanvasEvents(this, canvas);
   }
}

class twrConWindow implements CanvasEvents {
   // same idea as twrConCanvas
}
```
With an interface like CanvasEvents, a program could directly attach any level to the canvas and the upper level should be able to pass events down as needed. This way it doesn't matter if a program is connected to a virtual canvas or directly to an on-screen canvas, it will work the same.
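A self-contained illustration of that property (re-declaring a minimal version of the interface so it stands alone): a window implementing the same interface can sit between the event source and a child console, and the child can't tell the difference.

```typescript
enum CEType { KEY_DOWN, KEY_UP }
interface CEvents { handleKeyEvent(event: CEType, key: number): void; }

// A console that just records the keys it receives.
class ChildConsole implements CEvents {
   keys: number[] = [];
   handleKeyEvent(_event: CEType, key: number) { this.keys.push(key); }
}

// A window forwarding events down unchanged; a real one might filter by
// focus or visibility before forwarding.
class ForwardingWindow implements CEvents {
   constructor(private child: CEvents) {}
   handleKeyEvent(event: CEType, key: number) {
      this.child.handleKeyEvent(event, key);
   }
}
```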
Something akin to your jsEventsLib was on my to-do list. That is, the ability to ask for JS UI events from C w/o having to write JS code to send each event.
Please add it as a built-in library. Something like "twrLibUIEvents".
Regarding registerAnimationLoop: the way events in twrWasmModuleAsync are implemented is that they send a "message" to the worker thread (see `twrmodasync.ts`).
I am wondering if there is a way to make an animation loop faster. For example, implement it in code that is linked in and runs in the worker thread (with something like the `isCommonCode` option). I think the answer is "no". When I looked into this in the past with the D2D API, I discovered that even though there is a version of the Canvas API that can run in worker threads, it seems to require that the JS event loop run. But our worker thread often blocks. This is why I use the `eventQueue` that I wrote -- the Atomic operations work even if the JS event loop is blocking. So I think the likely result here is that we just document this in the jsEventsLib doc.