SimHacker / MicropolisCore

SimCity/Micropolis C++ Core
GNU General Public License v3.0

Rendering sprites #4

Open eliot-akira opened 3 months ago

eliot-akira commented 3 months ago

With the JavaScript port MicropolisJS, I remember I found and fixed a bug in the scanning (?) logic that was preventing trains from being generated.

It looks like the same bug exists in MicropolisEngine - or rather, the port faithfully translated the bug into JS. As far as I can see there are no trains running in any of the cities.

I forgot specifically which part of the code was causing the issue, but it was something small, like a single conditional statement. Debugging the C++ code is somewhat challenging (for one thing, I'm not so familiar with the language) - I think I'll temporarily add a JS callback that receives values and outputs them to the console log.

Anyway, I'll keep looking, tracing the logic - will make a pull request if I solve it.

SimHacker commented 3 months ago

Good catch! I’ll dig through the old code to see how it generates trains. Could you please open an issue so it won’t fall through the cracks if I forget?

I’m re-reading the Snap! Manual and extensions documentation and code, to figure out how to integrate it with Micropolis.

Right now it's just putting a big draggable Snap! window over the full-screen Micropolis view, but they need to be the other way around, with the Micropolis view embedded in the Snap! stage.

From reading the code, it turns out there is a way to do that, which the video camera and map features use: it gives Snap! a canvas to draw as a constantly updating stage background.

That means the Micropolis view will have to be off screen, and Snap will implement the mouse tracking.

But I still want to be able to run Micropolis without Snap of course, so I'm refactoring the big blob of code in MicropolisView.svelte into independent TypeScript classes and Svelte components like MicropolisSimulator.ts, MicropolisView.svelte, and MicropolisCallbackLog.ts, so we can make different implementations for MicropolisViewSnap.svelte and MicropolisCallbackSnap.ts.

-Don

eliot-akira commented 3 months ago

I went spelunking into the codebase, added a method Micropolis::log() to send a string to the JS console log, and traced the logic of railroads and trains. It was interesting to inspect values in real time as the engine went through its cycles. A log method like this seems handy for sending arbitrary values, possibly even JSON strings.
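
A rough sketch of what I have in mind on the JS side (the onEngineLog name is just a placeholder, not the actual API): if the string happens to parse as JSON, log it as structured data, otherwise log it as plain text.

// Hypothetical JS-side handler for strings sent from a C++ Micropolis::log()
// style method. The handler name and the wiring are assumptions.
function onEngineLog(message: string): void {
    try {
        const value = JSON.parse(message);
        console.log("Micropolis::log (JSON):", value);
    } catch {
        console.log("Micropolis::log:", message);
    }
}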

As far as I could tell, the train logic is working correctly. It's just that the JS side is not rendering any sprites yet. So there's no train, helicopter, boat, etc.

So I'll change the title of this issue, as a reminder about sprite rendering.

eliot-akira commented 3 months ago

That makes sense about putting the Micropolis canvas into Snap, and not the other way around. It sounds like it may involve an overhaul of the web interface, to use Snap as the foundation.

By the way, I was able to extract the TileRenderer and use it independently, including mouse navigation, in a new empty project with a static page. In the process, I found that WebGLTileRenderer works but CanvasTileRenderer doesn't, using the same interface, maybe due to some change in the data schema. I like how the canvas rendering logic is much simpler than WebGL, so when I have time I plan to study it more deeply and see if I can get it working.

SimHacker commented 3 months ago

Snap has some hooks to enable drawing a real time animated stage background, for video and maps, that I think would work.

https://github.com/jmoenig/Snap/blob/master/src/objects.js#L8638

// projection layer - for video, maps, 3D extensions etc., transient
this.projectionSource = null; // offscreen DOM element for video, maps, 3D
this.getProjectionImage = null; // function to return a blittable image
this.stopProjectionSource = null; // function to turn off video stream etc.
this.continuousProjection = false; // turn ON for video
this.projectionCanvas = null;
this.projectionTransparency = 50;

https://github.com/jmoenig/Snap/blob/master/src/objects.js#L8774

// projection layer (e.g. webcam)
if (this.projectionSource) {
    ctx.globalAlpha = 1 - (this.projectionTransparency / 100);
    ctx.drawImage(
        this.projectionLayer(),
        sl / this.scale,
        st / this.scale,
        ws,
        hs,
        clipped.left() / this.scale,
        clipped.top() / this.scale,
        ws,
        hs
    );
    this.version = Date.now(); // update watcher icons
}

Here's the beef:

https://github.com/jmoenig/Snap/blob/master/src/objects.js#L8862

// StageMorph video capture

StageMorph.prototype.startVideo = function() {
    var myself = this;

    function noCameraSupport() {
        var dialog = new DialogBoxMorph();
        dialog.inform(
            localize('Camera not supported'),
            localize('Please make sure your web browser is up to date\n' +
                'and your camera is properly configured. \n\n' +
                'Some browsers also require you to access Snap!\n' +
                'through HTTPS to use the camera.\n\n' +
                'Please replace the "http://" part of the address\n' +
                'in your browser by "https://" and try again.'),
            this.world
        );
        dialog.fixLayout();
        if (myself.projectionSource) {
            myself.projectionSource.remove();
            myself.projectionSource = null;
        }
    }
    if (this.projectionSource) { // video capture has already been started
        return;
    }

    this.projectionSource = document.createElement('video');
    this.projectionSource.width = this.dimensions.x;
    this.projectionSource.height = this.dimensions.y;
    this.projectionSource.hidden = true;
    document.body.appendChild(this.projectionSource);
    if (!this.videoMotion) {
        this.videoMotion = new VideoMotion(
            this.dimensions.x,
            this.dimensions.y
        );
    }
    if (navigator.mediaDevices && navigator.mediaDevices.getUserMedia) {
        navigator.mediaDevices.getUserMedia({ video: true })
            .then(function(stream) {
                myself.getProjectionImage = myself.getVideoImage;
                myself.stopProjectionSource = myself.stopVideo;
                myself.continuousProjection = true;
                myself.projectionSource.srcObject = stream;
                myself.projectionSource.play().catch(noCameraSupport);
                myself.projectionSource.stream = stream;
            })
            .catch(noCameraSupport);
    }
};

StageMorph.prototype.getVideoImage = function () {
    return this.projectionSource;
};

StageMorph.prototype.stopVideo = function() {
    if (this.projectionSource && this.projectionSource.stream) {
        this.projectionSource.stream.getTracks().forEach(track =>
            track.stop()
        );
    }
    this.videoMotion = null;
};

StageMorph.prototype.stopProjection = function () {
    if (this.projectionSource) {
        this.stopProjectionSource();
        this.projectionSource.remove();
        this.projectionSource = null;
        this.continuousProjection = false;
    }
    this.clearProjectionLayer();
};

StageMorph.prototype.projectionSnap = function (target) {
    var snap = newCanvas(this.dimensions, true),
        ctx = snap.getContext('2d');
    ctx.drawImage(this.projectionLayer(), 0, 0);
    return new Costume(snap, (target || this).newCostumeName(localize('snap')));
};
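
For Micropolis I think the hookup could be as simple as something like this (just an untested sketch against the fields above; micropolisCanvas stands for whatever offscreen canvas the tile renderer draws into):

// Sketch: hook an offscreen Micropolis canvas in as the Snap! stage's
// projection source, using the same fields the video capture code sets up.
function startMicropolisProjection(stage: any, micropolisCanvas: HTMLCanvasElement): void {
    stage.projectionSource = micropolisCanvas;          // offscreen DOM element
    stage.getProjectionImage = () => micropolisCanvas;  // function to return a blittable image
    stage.stopProjectionSource = () => {};              // nothing extra to shut down
    stage.continuousProjection = true;                  // redraw every frame, like video
    stage.projectionTransparency = 0;                   // draw it fully opaque
}

Turning it off would go through the same stopProjection() path as the video capture.
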
SimHacker commented 3 months ago

Ah yes, only the WebGLTileRenderer should be trusted to work, since that's the one I'm actually using and adding new features to, and the others were earlier experiments that are falling behind.

The CanvasTileRenderer currently draws terribly embarrassing fuzzy bloody seams around the tiles because it clips and draws each individual tile, and the neighboring tiles bleed through, so I don't think that approach is viable. But it did help me sort out the API and coordinate system transformations and tracking stuff (the unfortunate fuzz even made it easier to see the tile edges, which I needed in order to debug). It was definitely easier to get working than the WebGL shader. It really helped me to run the Canvas and WebGL renderers side by side, responding to the same mouse events, to check and compare them.

The WebGL implementation was frustratingly broken at first, only showing half the tiles in a triangle, since (I eventually discovered) I was reusing a couple of vertices in the two triangles. I'm used to WebGL either not working, or fully working, but this 50% diagonal triangle of working was ridiculous! Somehow I broke one of the two triangles for a reason I still can't fathom, but once I put in duplicate vertices it suddenly started working perfectly!

So the CanvasTileRenderer definitely is useful as a non-GPU reference implementation, but a different approach than clipping and bit-blitting each individual tile would work better, like simply iterating over all the screen pixels and sampling from the appropriate tile like the WebGL shader does. And it will support the largest number of devices, so it deserves to be brought and kept up to date.

My retrocomputing instincts told me it would be more efficient for it to call native code to perform the bit-blits, but in current reality JavaScript is way fast enough to just loop over the pixels without paying the price for ping-ponging between the interpreter and native code for each tile. You have to weigh the cost of getting back and forth between JavaScript and native bit-blit primitives, versus just writing the whole custom drawing loop in JavaScript and letting the JITter sort it out. The only way to know is to measure, but in this case the answer is obvious because scaling and clipping tiles is butt-ugly.
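
Roughly what I mean by just looping over the pixels in JavaScript, as an untested sketch (none of these names are the actual CanvasTileRenderer API):

// Per-pixel canvas renderer sketch, in the spirit of the WebGL shader:
// for each destination pixel, figure out which map tile and which texel
// inside that tile it lands on, and copy the color from the tile atlas.
function renderTilesPerPixel(
    dst: ImageData,          // destination pixels (screen sized)
    atlas: ImageData,        // tile set texture atlas
    map: Uint16Array,        // tile indices, row major, mapWidth * mapHeight
    mapWidth: number,
    mapHeight: number,
    tileSize: number,        // tile resolution in atlas pixels, e.g. 16
    atlasColumns: number,    // tiles per row in the atlas
    panX: number,            // world space pan offset in pixels
    panY: number,
    zoom: number             // screen pixels per world pixel
): void {
    const dstData = dst.data;
    const atlasData = atlas.data;
    for (let y = 0; y < dst.height; y++) {
        for (let x = 0; x < dst.width; x++) {
            // Screen -> world coordinates.
            const wx = Math.floor(x / zoom + panX);
            const wy = Math.floor(y / zoom + panY);
            const col = Math.floor(wx / tileSize);
            const row = Math.floor(wy / tileSize);
            const di = (y * dst.width + x) * 4;
            if (col < 0 || row < 0 || col >= mapWidth || row >= mapHeight) {
                dstData[di + 3] = 0; // outside the map: transparent
                continue;
            }
            const tile = map[row * mapWidth + col];
            // Tile index -> atlas position, plus the texel offset inside the tile.
            const tx = (tile % atlasColumns) * tileSize + (wx % tileSize);
            const ty = Math.floor(tile / atlasColumns) * tileSize + (wy % tileSize);
            const si = (ty * atlas.width + tx) * 4;
            dstData[di] = atlasData[si];
            dstData[di + 1] = atlasData[si + 1];
            dstData[di + 2] = atlasData[si + 2];
            dstData[di + 3] = atlasData[si + 3];
        }
    }
}

No per-tile clipping or drawImage calls, so there's nothing to bleed at the tile edges, and scaling is just nearest-neighbor sampling, the same way the shader does it.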

The WebGPUTileRenderer one is more ambitious and forward looking, and definitely worth a revisit once the paint dries on the WebGL version. WebGPU is just so cool, and I want to do more cellular automata and image processing stuff with it.

There's no practical requirement to use WebGPU instead of WebGL for something as simple as rendering tiles, but in general, WebGPU will make it easier to do more advanced things and take better advantage of the hardware, without all the headaches of WebGL, whose design goes back to 1982 when graphics accelerators were a hell of a lot different than they are today. So I definitely plan on bringing that up to date at some point.

The next steps for the tile renderer are:

- generalizing tilesets and tile maps, so you can allocate any number of tile sets at any positions and layouts within tile texture atlases
- supporting different resolutions (even tile sets with different resolution tiles, including 1x1 pixel pure color tiles, etc)
- different enumerations (remapping the tile indices to different locations)
- tile animation (having the tile renderer automatically switch between animated tiles)
- tile overrides (the blinking lightning bolt symbol that's displayed at the center of an unpowered zone)
- transparency, blending, and special effects like blinking, highlighting, and cursors
- arbitrarily sized tile sets that can pack into arbitrary parts of tile textures
- supporting plug-ins with custom tile sets and renderers
- dynamically allocating and packing tiles to support many plug-ins and layers at once

For example, the blinking lightning bolt symbol can be implemented as an overlay tile layer, using the main tile set with a 1x1 wrapping map selecting the lightning bolt tile, enabled during even seconds for each tile whose ZONEBIT is true and PWRBIT is false. The layer's shader will have a special conditional feature to enable/disable it during odd seconds depending on the center bit of each tile.

So each layer of tiles could have its own custom JavaScript code and shader snippets to pass it custom parameters, and enable/disable and modify which tile it selects and how and where it draws it.

It would be cool to factor out these application specific special features into little shader snippets that are spliced into the shader string, so a tile map could include both JavaScript and shader logic for special rendering effects.

Then the overall map views with data overlays (and the Dynamic Zone Finder renderer, etc) could be implemented as custom layers, each with their own shader snippets, plus some metadata that says which inputs they depend on. We could also combine all the currently enabled custom layers into a single one-pass shader, instead of applying them one after the other.
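
To make that concrete, here's the kind of shape I'm imagining for a layer descriptor (all hypothetical, nothing like this exists in the code yet): the JavaScript side supplies per-frame parameters, and the snippet gets spliced into the tile shader.

// Hypothetical layer descriptor: data inputs it depends on, a JS function
// that supplies per-frame uniforms, and a GLSL snippet spliced into the
// tile shader. Illustrative only.
interface TileLayerDescriptor {
    name: string;
    dependsOn: string[];                        // data inputs the layer reads
    uniforms: () => Record<string, number>;     // per-frame parameters from JS
    shaderSnippet: string;                      // spliced into the tile shader
}

// The unpowered zone lightning bolt overlay from above, in that shape.
// Assumes the surrounding shader defines tileValue, ZONEBIT, PWRBIT,
// lightningBoltTile and overlayTile; those are placeholders here.
const unpoweredZoneOverlay: TileLayerDescriptor = {
    name: "unpoweredZoneOverlay",
    dependsOn: ["tileMap"],
    uniforms: () => ({
        u_blink: Math.floor(Date.now() / 1000) % 2,  // blink on alternate seconds
    }),
    shaderSnippet: `
        if (u_blink == 1 && (tileValue & ZONEBIT) != 0 && (tileValue & PWRBIT) == 0) {
            overlayTile = lightningBoltTile;
        }
    `,
};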

It will be useful for all kinds of games and cellular automata and info visualization, even SimCity 2000 style iso tiles!

SimHacker commented 3 months ago

Of course sprite and cursor layers are just a special case of tile layers (offset/scaled/rotated 1x1 or even bigger tile maps). It will be general enough and have all the features to support Micropolis sprite animation (and all other kinds of sprites, like rendering user defined Snap! sprites), as additional layers. So tile layers with their own "hot spot" and transformation would be mighty useful for sprites and cursors.

One thing that WebGPU has over WebGL (although I don't know their specific limits for sure) is the ability to pass in more data structures and texture layers than WebGL supports. They both have limits, which may change over time, but in general WebGPU is a hell of a lot more flexible and future proof: you don't have to stuff data structures into textures and resort to hacks like that.

A smart dynamic rendering layer compiler could conglomerate as many layers as it could into each shader, respecting the texture and data input limits, considering which tilesets and data layers and parameters they depend on and share, to produce a series of optimized shader passes.
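
The greedy core of that could be as simple as this sketch (real layers would also have to agree on tileset, blending and ordering, so this only shows the input-limit part):

// Pack enabled layers into passes so each pass stays under the texture /
// data input limit, sharing inputs that several layers depend on.
// Assumes layers are already in back-to-front order. Illustrative only.
function packLayersIntoPasses(
    layers: { name: string; dependsOn: string[] }[],
    maxInputsPerPass: number
): { name: string; dependsOn: string[] }[][] {
    const passes: { name: string; dependsOn: string[] }[][] = [];
    let current: { name: string; dependsOn: string[] }[] = [];
    let inputs = new Set<string>();
    for (const layer of layers) {
        // The inputs this pass would need if the layer joined it.
        const merged = new Set([...inputs, ...layer.dependsOn]);
        if (merged.size > maxInputsPerPass && current.length > 0) {
            passes.push(current);                  // close off the current pass
            current = [];
            inputs = new Set(layer.dependsOn);
        } else {
            inputs = merged;
        }
        current.push(layer);
    }
    if (current.length > 0) {
        passes.push(current);
    }
    return passes;
}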

SimHacker commented 3 months ago

Yeah debugging the C++ code compiled into WebAssembly is practically impossible, unless you like to look at raw WebAssembly instructions, in which case it's so much fun! ;) So for people without so much time on their hands, printf debugging is the way to go, since it recompiles in only a few seconds!

I still have to figure out how to configure Visual Studio Code to automatically recompile the C++ code and hot-redeploy the web app whenever I change a line of C++, but maybe it's better to have a little buffering in there since C++ is such a big pain in the ass that I'd hate to break the running app every time I change one character.

The WebAssembly runtime has a stdout/stderr reporting interface that you can hook into in JavaScript when you make the web module. In the latest overhaul where I factored out MicropolisSimulator and TileView (in preparation for integrating with Snap!), I defined some stdout and stderr hooks that prefix the string with the name of the module so you know where it came from.

https://github.com/SimHacker/MicropolisCore/blob/main/micropolis/src/lib/MicropolisSimulator.ts#L14

micropolisengine = {
    print: (message: string) => console.log("micropolisengine:", message),
    printErr: (message: string) => console.error("micropolisengine: ERROR: ", message),
    setStatus: (status: string) => console.log("micropolisengine: initModule: status:", status),
    locateFile: (path: string, prefix: string) => {
      console.log("micropolisengine: initModule: locateFile:", "prefix:", prefix, "path:", path);
      return prefix + path;
    },
    onRuntimeInitialized: () => console.log("micropolisengine: onRuntimeInitialized:"),
  };

  await initModule(micropolisengine);

SimHacker commented 3 months ago

On the subject of inspecting the internal state of the simulator, I want to make an optional way to dump ALL of that out as JSON, including all timestamped editing commands, the 2d map overlay layers, even the whole save file (which is tiny by modern standards, and will easily compress even tinier), so you can rewind time to inspect the simulator state and replay it from any point.

Then we can send that stream of telemetry to a Prometheus time series database at regular intervals (or every step for debugging), and make Grafana dashboards and panels that query and visualize it.

That will be GREAT for debugging, as well as playing the game itself, supporting multiple players, and using it in educational settings (giving much deeper insights than just the traditional history and evaluation dialogs and RCI gauge do, teaching students to analyze data and make their own Grafana dashboards and Prometheus queries, to measure the effects and consequences on their own cities of experiments and plug-ins they can script in Snap!).

The simulator is deterministic, so you can write out the initial save file and timestamped edit commands (every time the universe branches), or even a complete save file on every edit plus every month, which makes it easy for new players to join and fast-forward to synchronize.

That's the basis for a synchronous timeline or even atemporal multiverse (rewinding and branching time like Braid) multi-player game!
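
Roughly the shape of the record stream that implies (the types are illustrative, and the engine methods named in the replay sketch are assumptions, not the current API):

// A city timeline: the initial save file, timestamped edit commands, and
// periodic checkpoints so new players can join and fast-forward quickly.
interface EditCommand {
    tick: number;       // simulation step when the edit was applied
    tool: string;       // e.g. "bulldozer", "residential", "road"
    x: number;
    y: number;
}

interface CityTimeline {
    initialSaveFile: Uint8Array;     // tiny by modern standards, compresses well
    edits: EditCommand[];            // every point where the universe branches
    checkpoints: { tick: number; saveFile: Uint8Array }[]; // e.g. every month, sorted by tick
}

// Deterministic replay: load the latest checkpoint at or before the target
// tick, then re-apply edits and step the engine forward.
// loadSaveData, applyEdit and simTick are hypothetical engine methods.
function reconstructAt(engine: any, timeline: CityTimeline, targetTick: number): void {
    const reachable = timeline.checkpoints.filter(c => c.tick <= targetTick);
    const start = reachable.length > 0
        ? reachable[reachable.length - 1]
        : { tick: 0, saveFile: timeline.initialSaveFile };
    engine.loadSaveData(start.saveFile);
    for (let tick = start.tick; tick < targetTick; tick++) {
        for (const e of timeline.edits) {
            if (e.tick === tick) engine.applyEdit(e);
        }
        engine.simTick();
    }
}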

https://prometheus.io/

https://grafana.com/

https://grafana.com/docs/grafana/latest/fundamentals/timeseries/

Rather than implementing a JSON export feature in C++ (since C++ doesn't have convenient built-in JSON support), I think it's better to just systematically expose EVERYTHING about the simulator state to JavaScript (only ignoring a few things that really don't matter), and then write the code to pull it out into a JSON+binary blob and stream that to the time series database, all in JavaScript. That would serve as a good test that all the simulator state was exposed correctly. There are still a bunch of things missing, and the emscripten.cpp wrapper file needs to be better organized so it's easier to tell what's been wrapped and what hasn't.
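
Something along these lines, all on the JS side (the field names here are just a small sample of the state to expose, and whether each one is already wrapped in emscripten.cpp varies):

// Pull a snapshot of exposed simulator state into a JSON string, suitable
// for logging or streaming to a time series database. Sample fields only.
function snapshotSimulatorState(micropolis: any): string {
    return JSON.stringify({
        timestamp: Date.now(),
        cityTime: micropolis.cityTime,
        totalFunds: micropolis.totalFunds,
        cityPop: micropolis.cityPop,
        cityScore: micropolis.cityScore,
        crimeAverage: micropolis.crimeAverage,
        pollutionAverage: micropolis.pollutionAverage,
        landValueAverage: micropolis.landValueAverage,
    });
}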

SimHacker commented 3 months ago

It would also be cool to get the WASM simulator running in the node server, so, for example, the server can have an endpoint that will return a save file at any point in time, by reconstructing it from the most recent save file plus all simulation steps and edits.
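
For example, something like this on the Node side (getSaveFileAt is a stand-in for the reconstruction step, however that ends up being implemented):

// Sketch of a save file endpoint using Node's built-in http module: parse a
// target tick from the URL and return the reconstructed save file bytes.
import * as http from "http";

declare function getSaveFileAt(tick: number): Promise<Uint8Array>; // hypothetical

http.createServer(async (req, res) => {
    const url = new URL(req.url ?? "/", "http://localhost");
    if (url.pathname === "/savefile") {
        const tick = Number(url.searchParams.get("tick") ?? "0");
        const saveFile = await getSaveFileAt(tick);
        res.writeHead(200, { "Content-Type": "application/octet-stream" });
        res.end(Buffer.from(saveFile));
    } else {
        res.writeHead(404);
        res.end();
    }
}).listen(8080);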

There are a lot of other reasons to run the simulator on the web server too (like the sheer beauty of the symmetry), but that's one example. The old Python version of Micropolis centrally ran multiple instances of the C++ simulation on the Python web server, and any number of web browser / Flash clients could connect to them (not actually running the simulation, just viewing the tiles, simulator state, telemetry and chat messages). But other obligations called, and the technology went stale before I had a chance to develop a full-blown multi-player interface.

Now that the simulator runs just fine in the browser, a better approach is running it in synchronized lockstep in all the clients (like how Caffeine uses Deno/WebRTC, or the Croquet project, or The Sims Online even), just broadcasting timestamped editing and chatting and content sharing messages to all other clients, instead of running it on the server. But running the simulation on the server is still useful for video streaming and mobile interfaces, a site for discovering and sharing user-created content and save files, and stuff like that.

https://observablehq.com/@ccrraaiigg/caffeine

https://en.wikipedia.org/wiki/Croquet_Project