flowtsohg / mdx-m3-viewer

A WebGL viewer for MDX and M3 files used by the games Warcraft 3 and Starcraft 2 respectively.

How do you get an arbitrary model to walk around with animation? #70

Closed arcman7 closed 3 years ago

arcman7 commented 3 years ago

Demo video

You can see in this video, the sheep models just float from one location to the next without any sort of walking animation. The code that updates their positions looks like this:

function walkSheepToNewLocations(directions, delta) {
  for (let i = 0; i < sEnv.num_sheep; i++) {
    // using the boolean extra arg introduced in PR https://github.com/flowtsohg/mdx-m3-viewer/pull/69
    sheepModelInstances[i].setSequence(sheepWalkSeq, true);
    sheepModelInstances[i].move([directions[i][0] * delta, directions[i][1] * delta, 0]);
  }
}

What's weird is I never came across this issue when animating a single Grunt model walking. I would merely set the sequence to the walk value and then start calling gruntModelInstance.move([delta_x, delta_y, 0]).

With the sheep, it doesn't matter whether I remove sheepModelInstances[i].setSequence(sheepWalkSeq, true) from the for-loop or not; they still just float around.

flowtsohg commented 3 years ago

I don't think adding a boolean to the MDX handler is the correct solution - this is a case where the map viewer just doesn't give you what you want, so you'll have to make it do what you want. For instance, if you want game-like functionality, you should probably have some basic implementation of units that have their own state, like the current order, and base the animations on that. This conflicts with the existing automatic "just pick a stand animation" logic, which is done here. The more game-like functionality you want, the more you might realize it's not so easy to do. This repo was designed to be a model viewer, not wc3, so things that should be easy in the context of a game are hard here - but if you add some cool stuff, I wouldn't mind a PR 😛 You can look at https://github.com/Retera/WarsmashModEngine/ to see how Retera does game-related logic if you are interested.

arcman7 commented 3 years ago

Point me in the right direction and I'll do my best to act on it -

For instance, if you want game-like functionality, you should probably have some basic implementation of units that have their own state, like the current order, and you base the animations on it.

What do you mean by the current order?

My current implementation for adding a grunt unit looks like this:

async function addGruntUnit(pos = [0, 0, 0], rotation = [0, 0, 0, 1]) {
  const model = window.gruntModel || await getGruntModel(false, false);
  const mockUnitInfo = { "location": [0, 0, -1000], "rotation": rotation, "angle": rotation, "player": 0, "scale": [1, 1, 1] };
  viewer.map.units.push(new Unit(viewer, model, { 'comment(s)': 'Grunt' }, mockUnitInfo));
  let instance = model.addInstance();
  viewer.worldScene.addInstance(instance);
  instance.setLocation(pos);
  createPhysicsCylinder(25, 110, { kinematic: true, mdxM3Obj: instance }); // I know, don't judge
  return instance;
}

class Unit {
  constructor(map, model, row, unit) {
    let instance = model.addInstance();
    instance.move(unit.location);
    instance.setRotation(unit.rotation);
    instance.scale(unit.scale);
    instance.setTeamColor(unit.player);
    instance.setScene(map.worldScene);
    if (row && row.moveHeight) {
      const heapZ = vec3.create();
      heapZ[2] = row.moveHeight;
      instance.move(heapZ);
      instance.setVertexColor([row.red / 255, row.green / 255, row.blue / 255, 1]);
      instance.uniformScale(row.modelScale);
    }
    this.instance = instance;
    this.row = row;
  }
}

I'd love to put up a gaming "viewer" or some other module that makes sense in the context of your repo here.

flowtsohg commented 3 years ago

Consider a unit in wc3 - it's an object with a lot of state: a position, facing, scale, health, mana, attack parameters, defense parameters, and so on. Some of these things are somewhat incorporated into the viewer's own instances, but most aren't related directly to the viewer. For example, if you order a unit to move somewhere, you need to calculate the pathing, ensure the unit has its order ("move", "smart", "attack", whatever), maybe also a target unit, and so on, and you base the current animation on that state. There are kinda endless details, but you can probably get something basic to work without too many issues, especially if you look at Warsmash, since it's basically a mishmash of this repo, the HiveWE repo, and Retera's own modifications and additions.
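
For illustration, here's a minimal sketch of that kind of order-driven state (every name here is hypothetical and not part of the viewer's API; setSequence is the instance method discussed in this thread):

enum Order { None, Move, Attack }

class GameUnit {
  order = Order.None;

  constructor(public instance: { setSequence(index: number): void }) {}

  // Call this when the order changes (not every frame, since setSequence
  // restarts the chosen animation), and map the order to a sequence index.
  applyOrder(order: Order, standSeq: number, walkSeq: number, attackSeq: number) {
    this.order = order;
    switch (order) {
      case Order.Move: this.instance.setSequence(walkSeq); break;
      case Order.Attack: this.instance.setSequence(attackSeq); break;
      default: this.instance.setSequence(standSeq);
    }
  }
}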

I thought about adding some basic things like units and running the map script to make the map viewer more interactive, but ultimately the fact that it runs so slow took away my motivation to work on it at all. Years spent on optimizing this code and realizing that it will just NEVER run fast in JS kinda sucks.

arcman7 commented 3 years ago

I'm looking at their repo to get a sense of what they did right now.

Years spent on optimizing this code and realizing that it will just NEVER run fast in JS kinda sucks.

What's your idea of slow? How many simultaneously moving units do you think could be rendered and still keep a decent frame rate?

flowtsohg commented 3 years ago

Slow as in you have a bunch of units and particles, not unlike any regular wc3 scenario while playing a map, and JS simply cannot hold 60FPS, and that's with only the graphics. Now on top of that add the Lua VM, add units with their orders and pathing and constant height tests, and so many more things, and it's just not going to hold.

I used to think Java is slow, but Retera's more-or-less copy of older, worse code of mine easily gets over 200FPS in scenarios where the JS code can't reach 30, and HiveWE is faster still since it's C++.

I spent countless hours optimizing this repo with many different types of optimizations, many of which no longer exist because they didn't help and made the code needlessly complex. At the end of the day, I don't think it is possible to make real-world JS code fast, simply because of its memory model. No proper control over memory means that pretty much every access to every node in a skeleton - to pretty much every single object in JS - will ultimately be a cache miss. The optimizing engines in JS VMs probably improve this a bit, but there's just no scalability here when you want something like a game engine.

Just as an example, if you look at the Node/SkeletalNode classes, I inlined a bunch of simple vector operations there, because that made a noticeable improvement in performance. That is absurd, and it's no way to write fast code.
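
To illustrate the kind of inlining described, here's a hedged sketch (the names are hypothetical; the viewer uses gl-matrix for its vector math):

// Instead of calling vec3.add(world, parentWorld, local) through a library,
// the component math is written out directly on the typed arrays:
function updateNodeLocation(world: Float32Array, parentWorld: Float32Array, local: Float32Array) {
  world[0] = parentWorld[0] + local[0];
  world[1] = parentWorld[1] + local[1];
  world[2] = parentWorld[2] + local[2];
}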

arcman7 commented 3 years ago

I believe a solution is currently being developed by Google (and every major web browser) for this specific class of problem: https://youtu.be/K2JzIUIHIhc?t=1081

WebGPU can currently be used in Chrome Canary.

From there you need to enable WebGPU: chrome://flags/#enable-unsafe-webgpu

GitHub / more info: Specification

Getting started with WebGPU

Am I wrong in thinking this solves the majority of the performance issues present in the mdx-m3-viewer codebase? Granted, it will take a bit of work to get this project fully switched over.

arcman7 commented 3 years ago

Sorry, I hope that wasn't presumptuous; I just got really excited when I saw this tech :D

flowtsohg commented 3 years ago

I've thought in the past about converting the code to WASM, and I did get hyped about WebGPU when it was announced a long time ago, but will it matter much to this code base? I don't know. Either way, I doubt it's worth the effort at this point - if you want legit performance, the browser is not the right target platform. I also don't have much time to work on code for the time being due to life stuff. :(

arcman7 commented 3 years ago

Well, that's fair. It would be a lot of work. I'm gonna try to see if I can get some of these models exported into the .babylon file format and try them out in their WebGPU demo.

arcman7 commented 3 years ago

but will it matter much to this code base? I don't know.

I've got one data point for you-

Map - https://easyupload.io/kwcf1j

           cells   instances   particles   fps
WebGPU     14      684         ~1500       60
WebGL      12      558         ~1400       32.5


This is your own rendering code, unaltered, seeing an immediate benefit of roughly double the FPS.

flowtsohg commented 3 years ago

I don't understand, if it's my code how does it run on WebGPU? 🤔

arcman7 commented 3 years ago

I don't understand, if it's my code how does it run on WebGPU? 🤔

I can't recall exactly how the API is set up, but I think it's meant to be backwards compatible, and in some cases it should just start working - https://www.construct.net/en/blogs/ashleys-blog-2/webgl-webgpu-construct-1519

To follow up on this, I'll disable the unsafe WebGPU flag and try it again. This will eliminate the possibility that the performance gains are simply Chrome Canary being that much more optimized for general rendering operations.

arcman7 commented 3 years ago

Can confirm: disabling WebGPU with the flag chrome://flags/#enable-unsafe-webgpu (the flag provides a quick way to toggle it) brings the fps down to 32.5 on Chrome Canary.

flowtsohg commented 3 years ago

So, I still don't understand if this is something I can run and how.

I don't doubt WebGPU can have better rendering performance than WebGL if the code was designed for it, and maybe it already runs somewhat faster also with a translation layer, although I don't quite understand what Construct is.

But, and this is an important but, this won't change the update stuff much. Get a bunch of units visible, and performance is going to be bad due to node updates and such things (even more so for Reforged models with their absurd amount of nodes). WASM might improve that, but I somewhat doubt it will be that effective (and again, require A LOT of work).

arcman7 commented 3 years ago

But, and this is an important but, this won't change the update stuff much. Get a bunch of units visible, and performance is going to be bad due to node updates and such things (even more so for Reforged models with their absurd amount of nodes).

Yes, I agree with your reasoning, and to your point, I did a test where, in both Canary and regular Chrome, I used that same map, zoomed as far as I could while still keeping everything in view (maxing the particle count), and watched the fps drop: 22 for Canary, 17 for regular Chrome.

When I have some time today I'm gonna read a bit more about WebGPU to try and figure out if there's a viable path to an outcome that minimizes the object management done by JS.

flowtsohg commented 3 years ago

Particles could theoretically be sped up in two ways.

First, it needs to have a C-like memory model: an array of structs. This is possible in JS by having a big shared buffer, where every consecutive N numbers represent a particle, and when moving a particle (e.g. because a particle was born or died), rather than moving an object, the numbers move - much like a C++ move operation (or a simple C memcpy, if you want to get to the bottom of it).
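
A hedged sketch of that layout (illustrative only, not the viewer's actual code):

// An array-of-structs particle pool in one shared buffer.
const FLOATS_PER_PARTICLE = 8; // e.g. position (3) + velocity (3) + life (1) + pad (1)
const MAX_PARTICLES = 10000;
const pool = new Float32Array(MAX_PARTICLES * FLOATS_PER_PARTICLE);
let alive = 0;

// A newly born particle takes the first free slot at the end of the live range.
function spawn(): number {
  return alive++ * FLOATS_PER_PARTICLE; // offset of the new particle's floats
}

// A dead particle is overwritten by the last live particle's numbers - the
// typed-array equivalent of a C memcpy - so the live particles stay packed.
function kill(index: number) {
  alive--;
  const dst = index * FLOATS_PER_PARTICLE;
  const src = alive * FLOATS_PER_PARTICLE;
  pool.copyWithin(dst, src, src + FLOATS_PER_PARTICLE);
}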

Second is to run the particles entirely in the shader. This involves transform feedback with WebGL2, or whatever equivalent WebGPU offers. The way it works is you store the shared buffer in a texture instead (e.g. a texture buffer), and then you render this texture, with the output data being the updated particle data, which is written to a framebuffer with a texture attached to it. The next frame, the input and output textures are swapped, and so on. How you track particles that died, know where to add new particles in the buffer, and so on, I didn't think about enough to say off the top of my head.
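
A rough sketch of the ping-pong half of that idea, in WebGL2 terms (shader creation and particle bookkeeping omitted; rendering into a float texture needs the EXT_color_buffer_float extension, and all names here are illustrative):

interface GpuParticleState {
  read: WebGLTexture;   // last frame's particle data
  write: WebGLTexture;  // this frame's output
  fbo: WebGLFramebuffer;
}

function updateParticlesOnGpu(gl: WebGL2RenderingContext, state: GpuParticleState, updateProgram: WebGLProgram, texelCount: number) {
  gl.useProgram(updateProgram);
  gl.bindFramebuffer(gl.FRAMEBUFFER, state.fbo);
  gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, state.write, 0);
  gl.viewport(0, 0, texelCount, 1);           // one texel per particle slot
  gl.bindTexture(gl.TEXTURE_2D, state.read);  // the fragment shader reads old state, writes new state
  gl.drawArrays(gl.TRIANGLES, 0, 3);          // full-screen pass covering every texel
  gl.bindFramebuffer(gl.FRAMEBUFFER, null);
  // Swap roles for the next frame.
  [state.read, state.write] = [state.write, state.read];
}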

There are also simple instances, for which I did multiple experiments in the past; animation queries can be O(1) instead of O(n), which is maybe a couple of percent; and many more things.

Yes, this code can become faster. But run the same code in C++ (HiveWE), or even in Java (Warsmash), and it just inherently works so much faster. At some point, fighting the VM just to get some more juice in an ever-losing battle becomes somewhat pointless. That point pretty much became node updates. Render 100 Mountain Kings and see how much effort is spent on updating the nodes. And yes, 100 Mountain Kings is a bit odd, but 100 visible units is not even a lot for WC3, and it barely nudges WC3 on the CPU side (it used to have serious issues on the GPU side because it was old as heck; I think it's better since Reforged).

Compare the following images: one runs on the web and shows a small part of a map, the other is from HiveWE rendering the same map in its entirety (it has TONS of objects and particles). I don't quite know what HiveWE renders or not, but even if we disregard all emitted objects and whatnot, the frame times speak for themselves (and this is without HiveWE going through years of optimization experiments). I also bothered showing my CPU usage, because the poor performance isn't the CPU dying from too much work - rather, the work it does is very inefficient.

My guess is, and has been for a long time now, that this is because of cache misses. The JS memory model is just terrible for data-oriented stuff like graphics, and essentially every access to anything can be a cache miss, which is horrible for performance when you are accessing tens and hundreds of thousands of objects (again, mostly for the node updates and particles).

Really, when it comes to WASM, my question wouldn't be whether it can run some math op a bit faster, but whether its memory model isn't terrible, because that is magnitudes of difference in performance in hot code paths. The compiler does help here a lot - for instance, making particles use move-like operations and be a big shared buffer would be handled automatically by the compiler. But when it comes to the actual WASM runtime, I have no clue.

[Two screenshots: the web viewer showing part of the map, and HiveWE rendering the full map, with frame times and CPU usage visible.]

arcman7 commented 3 years ago

Second is to run the particles entirely in the shader. This involves feedback transform with WebGL2 or whatever equivalent WebGPU offers. The way it works is you store the shared buffer in a texture instead (e.g. texture buffer), and then you render this texture, with the output data being the updated particle data, which is then written to a framebuffer with a texture attached to it. The next frame the input and output textures are swapped, and so on. How you track particles that died, know where to add new particles in the buffer, and so on, I didn't think about enough to say off the top of my head.

I don't really know enough yet at a low level how these models are rendered.

Take for example /src/parsers/mdlx/layer.ts

It differs from some of the other layer files in that it has fresnel parameters. How do these directly get used in the rendering of a model and then put into a shared buffer as a texture?

I mean in general, are models composed of various layers? And is each layer type different for the different file types?

arcman7 commented 3 years ago

Sorry, I've been reading through the files, and I still don't get the high-level/low-level concepts at play here. All I know at this point is that you write very clean code lol.

I'm trying to get a clear understanding of what steps JavaScript is involved in; otherwise, it's hard to know what the trade-offs are between your two optimization approaches.

flowtsohg commented 3 years ago

Hmm, that's somewhat of a big question :P

There are meshes (geosets) with referenced materials.

Materials in TFT/SD work in layers, sort of like how you'd have layers in 2D painting software, but the 3D equivalent. For example, to get team colors, the first layer is the solid team color, so you get a fully red rendered mesh. Then the next layer holds the actual mesh diffuse texture, with alpha blending, such that where the texture is transparent, you see the red below it. This extends to more combinations and blending operations. As you render the same mesh multiple times, once for each layer, you get the final result.
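
A hedged sketch of that multi-pass idea (the helpers and fields here are hypothetical, not the viewer's actual code):

// Render the same geoset once per layer, bottom layer first.
function renderGeoset(gl: WebGLRenderingContext, geoset: { elementCount: number }, layers: { filterMode: number; texture: WebGLTexture }[]) {
  for (const layer of layers) {
    setBlendState(gl, layer.filterMode);          // e.g. opaque, alpha blended, additive
    gl.bindTexture(gl.TEXTURE_2D, layer.texture); // team color first, diffuse above it, etc.
    gl.drawElements(gl.TRIANGLES, geoset.elementCount, gl.UNSIGNED_SHORT, 0);
  }
}

// Hypothetical helper mapping a layer's filter mode to GL blend state.
declare function setBlendState(gl: WebGLRenderingContext, filterMode: number): void;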

Materials in HD always have a predefined structure of 6 layers with specific meanings, all used for one draw command (which makes them simpler and faster, but supports less exotic rendering).

Per-layer things like fresnel are used when drawing the layers, either as GL settings, or inputs to the shaders (although fresnel specifically is an HD thing I didn't bother implementing).

Particles are quite different. They hold only the per-particle information they need, like a location, and are updated globally by the scene (since particles need to update even if their instances don't get updated). When rendering a specific emitter, all of its particles are iterated, the needed data is copied to one shared buffer, and this buffer is then used by the shader to render all of them. If the design were different, so that the particles ARE the buffer, we would avoid constantly copying the data around, and data would only really need to "move" when a particle is created or removed. Moving this entire process into the shader is a bit complicated and can't be done with WebGL1, so I never looked too much into it.

flowtsohg commented 3 years ago

Just to illustrate what I mean by particles being the buffer - you can check out SkeletalNode, which works in the same way. There is a shared buffer for the numerical data, and each node gets typed-array references into this buffer. So, for example, all of the world matrices are held in one Float32Array, which is submitted in one go to the GPU when rendering, and each SkeletalNode gets its own Float32Array viewing a part of the shared one. I cheated a little, because the nodes then have non-shared stuff like parent references, object references, and the like. In C++ these would also be simple numbers, I suppose, and could also be shared.
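
The pattern looks roughly like this (illustrative numbers and names):

const MATRIX_FLOATS = 16;
const nodeCount = 256;

// One allocation backs every node's world matrix...
const worldMatrices = new Float32Array(nodeCount * MATRIX_FLOATS);

// ...and each node holds a subarray view into its own slice, so writing
// through the view mutates the shared buffer, which can then be uploaded
// to the GPU in a single call.
const nodeViews: Float32Array[] = [];
for (let i = 0; i < nodeCount; i++) {
  nodeViews.push(worldMatrices.subarray(i * MATRIX_FLOATS, (i + 1) * MATRIX_FLOATS));
}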

I set up a C++ WASM WebGL demo just to humor myself. Maybe I'll attempt at making the smallest demo I can at rendering an animated MDX...we'll see. I do already have the required parsers in C++ that I wrote a while ago :P

arcman7 commented 3 years ago

Shared array buffer

Okay, I'm sort of getting how this shared buffer is used, specifically with the skeletal nodes. What would be the easiest file type to dig into for a complete understanding from layers all the way up to its object hierarchy?

Looking at the mdx particle emitter

  getSpeed(out: Float32Array, sequence: number, frame: number, counter: number) {
    return this.getScalarValue(out, 'KPES', sequence, frame, counter, this.speed);
  }

  getLatitude(out: Float32Array, sequence: number, frame: number, counter: number) {
    return this.getScalarValue(out, 'KPLTV', sequence, frame, counter, this.latitude);
  }

  getLongitude(out: Float32Array, sequence: number, frame: number, counter: number) {
    return this.getScalarValue(out, 'KPLN', sequence, frame, counter, this.longitude);
  }

  getLifeSpan(out: Float32Array, sequence: number, frame: number, counter: number) {
    return this.getScalarValue(out, 'KPEL', sequence, frame, counter, this.lifeSpan);
  }

  getGravity(out: Float32Array, sequence: number, frame: number, counter: number) {
    return this.getScalarValue(out, 'KPEG', sequence, frame, counter, this.gravity);
  }

  getEmissionRate(out: Float32Array, sequence: number, frame: number, counter: number) {
    return this.getScalarValue(out, 'KPEE', sequence, frame, counter, this.emissionRate);
  }

  getVisibility(out: Float32Array, sequence: number, frame: number, counter: number) {
    return this.getScalarValue(out, 'KPEV', sequence, frame, counter, 1);
  }

So all of those calls to this.getScalarValue are basically read operations that tap into the shared buffer?

Particles - after going through a fire sprite animation video for Blender, I sort of understand what the particle system is and what it's used for: you can move and distort various images in sequence to get interesting-looking "fuzzy" effects at a relatively low cost, as far as animations go.

This is why they, the particles, are distinctly different from models being rendered as composite layers.

Does that all sound correct to you so far? That's my current understanding of how this works atm.

flowtsohg commented 3 years ago

Particle systems typically emit simple quads that are billboarded so they face the camera no matter where you look at them from. Emit enough of these quads, add some transparency effects via texture, add some movement/gravity, and you have a special effect.

For wc3 emitters, every time a particle is emitted it needs to get some properties from the emitter, based on the animation time of the owning instance - those would be the functions you listed above for ParticleEmitter. In addition, every particle has its own life property, and some properties, like the color, are animated based on that. For instance, a fire particle might start blue-ish, and as it updates towards its death it might turn yellow-ish.
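
That life-based animation boils down to something like this sketch (hypothetical names):

// lifeFraction runs from 0 at birth to 1 at death.
function lerpParticleColor(out: Float32Array, birth: Float32Array, death: Float32Array, lifeFraction: number) {
  out[0] = birth[0] + (death[0] - birth[0]) * lifeFraction;
  out[1] = birth[1] + (death[1] - birth[1]) * lifeFraction;
  out[2] = birth[2] + (death[2] - birth[2]) * lifeFraction;
}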

The handler splits it further. Every model has e.g. a ParticleEmitter2Object, which is the equivalent of the parser object but with extra handling like loading textures and whatnot. Then every instance has its own actual emitter (extending Emitter), which references the object in the model and does its thing.

There are more emitters in wc3. ParticleEmitter emits models, such as bones and guts flying all over the place when some units die. ParticleEmitter2 emits quads that are typically billboarded, but can also be set to always face the XY plane (i.e. the same orientation as the ground), plus all sorts of other settings that manipulate how the particles work. RibbonEmitter emits lines rather than quads, and all of the existing ribbons are chained together to form a mesh: the 1st and 2nd ribbons are connected to form a quad, the 2nd and 3rd are connected, and so on, and the texture it uses is stretched across the entire mesh (this allows, for example, "trails" following things, like the Paladin hammer attacks). EventObjects are a group of emitters that work slightly differently, but ultimately emit either models (e.g. Illidan's burning footsteps), splats, which are quads flat on the terrain (i.e. they follow terrain deformations), ubersplats, which are like splats (any difference? 🤔), or sounds, which are used for footsteps, death sounds, and such.

On the WASM note, I started experimenting with it a bit. So far the environment is horrible. I can't get Visual Studio to understand any WASM project I tried without manually adding every path and option needed, because the given CMake files don't work correctly. Either way, I did manage to get stuff going by compiling manually on the command line. It's very slow and annoying, but it works for now. The real problem I have right now is that allocations seem to be incredibly slow on the WASM side for some reason - i.e. trying to parse Footman.mdx (~105KB) would take 10 seconds when I was copying the buffer from JS to a newly allocated buffer in WASM. Passing the buffer in a more hacky way and avoiding the WASM-side buffer allocation made it load significantly faster, but it's still relatively very slow, most likely because of allocations in the MDLX parser itself. To be more specific: using C functions like malloc/free is extremely fast. Creating a new std::vector with a size (or reserving/resizing) is extremely slow.

arcman7 commented 3 years ago

The particle system makes sense, I believe. I'll have to test that understanding somehow.

Creating a new std::vector with a size (or reserving/resizing) is extremely slow.

Is the std::vector object a contiguous object or not? Is it possible to use arrays of bytes as the buffer?

I really don't get why allocating space to an std::vector would be slow.

flowtsohg commented 3 years ago

I started writing information about how to handle MDX models in the specs, but I ended up never quite adding all of the information I wanted. There is so much of it, it could fill a whole book chapter. Then again, I am not sure how many people would benefit from that, since programmers will eventually look at the code anyway, and it's not so complex nowadays (unlike v4 - now that was hard to read!), plus there are multiple implementations to look at.

I have no idea why std::vector is so slow either. I made a small change and moved to Rust instead :P It will take some time to learn the language properly, but at the very least, getting a demo with WebGL running took a few minutes.

arcman7 commented 3 years ago

Woah. How does that work?? You can run Rust in the browser the same way you can run WASM?

flowtsohg commented 3 years ago

It also has the tools to compile to WASM and provide interoperability. I think it can be compiled directly with Emscripten, and it has GLFW bindings and the like as well. But... it has its own native tools that support this with no extra work. All I did was take the WebGL example and modify it a bit. The example setup recompiles and rebundles on any Rust or JS change, and reloads the page. It's pretty neat. I suppose the next step is to get an ArrayBuffer in and see how to write a parser (and also check that it doesn't take seconds 😛).

arcman7 commented 3 years ago

That is amazing. Rust really sounds like the best solution. Here's me crossing my fingers for good performance 🤞

arcman7 commented 3 years ago

Is there a link to the workflow setup you're using with Rust? Just curious to see it.

flowtsohg commented 3 years ago

I tried other demo projects before, so I am not 100% sure, but I think all you need to do is install Rust, which comes with Cargo (Rust's NPM), download the demo from the link above, and run cargo install. For the NPM part, obviously have NPM installed, and run npm run serve to run in dev-watch mode.

Meanwhile, I am stuck on understanding how one is supposed to parse binary data :P

flowtsohg commented 3 years ago

It's not much, but I finally get how to read stuff. No time to do anything with it for now, though.

Maybe I'll figure out a nicer way to read strings... (not for tags, since I'll use u32 for those, but rather for actual strings)

// Excerpt from an impl block. Assumed imports for this snippet:
// use std::io::{Cursor, Read};
// use byteorder::{LittleEndian, ReadBytesExt};
pub fn load_file(&self, buffer: Box<[u8]>) -> u32 {
    let mut cursor = Cursor::new(&buffer);

    // capacity=4 doesn't mean there are actual bytes on the heap, gotta have a real value
    let mut mdlx = String::from("AAAA");
    unsafe {
        cursor.read(mdlx.as_bytes_mut()).expect("NOPE");
    }

    let mut vers = String::from("AAAA");
    unsafe {
        cursor.read(vers.as_bytes_mut()).expect("NOPE");
    }

    // using the byteorder crate
    let size = cursor.read_u32::<LittleEndian>().unwrap();
    let version = cursor.read_u32::<LittleEndian>().unwrap();

    version
}

And on the JS side:

console.log(v.load_file(new Uint8Array(buffer))); // 800
arcman7 commented 3 years ago

That's super nifty, I'm gonna have to try that out soon! Probably Thursday.

console.log(v.load_file(new Uint8Array(buffer))); // 800

So was there any performance gain using Rust to allocate buffers?

flowtsohg commented 3 years ago

Allocations are still slow, but not 10 seconds slow.

I am not sure if it's possible to share any kind of arrays from Rust to JS, though. This includes strings, primitive arrays, vectors, whatever. Strings can be cloned in and out - not ideal, but that's how JS works anyway. I am not sure how I could share mutable objects like vectors, matrices, vertices, etc., though. I probably shouldn't bother making things accessible in JS and should just write only what I need to render stuff, but it's somewhat discouraging that WASM is so... restricted (this has nothing to do directly with Rust or its wasm tools, but rather with how WASM has no knowledge of memory and only JS controls it - I also saw this in the C++ code, but didn't get far enough to care).

Maybe custom classes that expose the internal arrays via methods that use indices 🤔

flowtsohg commented 3 years ago

I'll be honest, after more playing around and figuring out what isn't really possible... I am becoming quite skeptical about WASM. Its memory limitation becomes more obvious as I try to write more things. Not being able to pass back to JS any shared strings or vectors, and therefore any structs that have any string or vector fields, and so on... This of course doesn't say anything about runtime performance, but how one is even supposed to make a real-world library that handles data and communicates 2-ways with JS under these limitations, I am not sure. I can see it working well for applications where access to data isn't so free, like games, but what about when I want e.g. fully exposed parsers? 🤔

When it's just primitive data it looks ok...

// Assumes `use wasm_bindgen::prelude::*;`.
#[wasm_bindgen]
#[derive(Copy, Clone)]
pub struct Vec3 {
    pub x: f32,
    pub y: f32,
    pub z: f32,
}

#[wasm_bindgen]
impl Vec3 {
    pub fn new() -> Vec3 {
        Vec3 { x: 0.0, y: 0.0, z: 0.0 }
    }

    pub fn read(&mut self, reader: &mut BinaryReader) {
        self.x = reader.read_f32();
        self.y = reader.read_f32();
        self.z = reader.read_f32();
    }
}
...

And on the JS side:

console.log(model.extent.min.x);
arcman7 commented 3 years ago

Not being able to pass back to JS any shared strings or vectors, and therefore any structs that have any string or vector fields, and so on...

Do you mean that literally or is it just really slow?

Also, are you pushing this scratch work up anywhere? I wouldn't mind being able to see it - maybe profile it a bit. I have a roommate who happens to be pretty good at optimizing stuff like that.

flowtsohg commented 3 years ago

Let me rephrase that. You can pass whatever you want, since "passing" mostly means moving bytes from context to context. But wasm_bindgen (Rust's native tool) doesn't support structs with dynamic memory, only ones that it can essentially memcpy to pass. Since strings, vectors, etc. are heap allocated, with the way Rust works they can't be memcpy'd and must be cloned for a legit deep copy. For strings like MdlxModel.name, this can be solved by not exposing them directly, but rather writing a getter/setter that does the cloning:

#[wasm_bindgen]
impl MdlxModel {
    #[wasm_bindgen(getter)]
    pub fn name(&self) -> String {
        self.name.clone()
    }

    #[wasm_bindgen(setter)]
    pub fn set_name(&mut self, s: String) {
        // the clone here is handled by wasm_bindgen with its glue code when it passes the string from JS
        self.name = s;
    }
}

But let's look at MdlxSequence:

#[wasm_bindgen]
pub struct MdlxSequence {
    name: String,
    pub start: u32,
    pub end: u32,
    pub move_speed: f32,
    pub flags: u32,
    pub rarity: f32,
    pub sync_point: u32,
    pub extent: MdlxExtent,
}

It also has a string name, so we make a getter/setter and it works. Right? No. This would allow instantiating an MdlxSequence on the JS side - the bindings will be correct. However, we still can't actually PASS any MdlxSequence from the WASM side, because it can't be copied. So let's try to add MdlxModel.get_sequence() and see what happens...

And then there's the real crux - how can WASM support vectors? Even if there were a stack-allocated vector (which goes against the point of vectors, but bear with me) and it could be memcpy'd, there is no mechanism to synchronize changes between the WASM and JS sides. What if one side pushes a new object, or changes the order? These changes won't be reflected on the other side. Maybe this can be solved with glue code (similar to Vue?), but it's becoming more and more messy.

There isn't enough code to really show much yet; I don't have a lot of time, and most of what I had was spent on understanding Rust and trying to figure out how to use wasm_bindgen.

flowtsohg commented 3 years ago

Pushed a bunch of changes I had stacking up over a long time. One of them is a simple example of how to add state to the units and doodads, so you don't have to do weird hacky things to avoid the automated stand animations.

For example, if you have your sheep in map.units, you can do sheep.state = WidgetState.WALK (from widget.ts) and it won't run the stand animations.

arcman7 commented 3 years ago

Awesome! Looking now. I just had the second dose of the Pfizer vaccine and I was completely out of commission these past few days. Excited to bounce back and try this out!

https://github.com/flowtsohg/mdx-m3-viewer/blob/827d1bda1731934fb8e1a5cf68d39786f9cb857d/clients/shared/localorhive.js#L10

So that's their endpoint for both classic and reforged assets?

arcman7 commented 3 years ago
  • The map viewer now handles both TFT and Reforged, using the isReforged argument. This also led to the map viewer client now using the new Hive API, which serves Reforged/SC2/etc. files.
  • Added drag & drop folder handling to the shared clients code, and the sanity test client now uses it.
  • SimpleOrbitCamera now correctly moves on the XY axes also when the canvas is not square.
  • Rather than isReforged, map stuff now uses the buildVersion directly. This is also exposed now directly as War3MapW3i.getBuildVersion() for convenience.

Definitely gonna try a Reforged map on Chrome Canary's WebGPU :D

flowtsohg commented 3 years ago

I am not sure if it will actually load a Reforged map - I guess my comment wasn't very explanatory. It just supports loading all of the base game files using the new structure :P

Although technically I think the files used by the map viewer should support Reforged? I don't think I ever tried.

The API doesn't have TFT files, unfortunately. You'll have to use the old URL that gets files from an unpacked game. Note that the API is much more than a simple GET: aside from supporting SC2 (and I think some other Blizzard stuff?), it does the whole hierarchy thing for WC3, and I believe it fully supports overriding the root as you can do in the Reforged WE, such as starting a path with _hd.mod: (or something similar, I don't quite remember). Unfortunately, it still doesn't support a tileset parameter, which is why the map viewer doesn't render the right cliff textures.

Like I said about WebGPU, it can probably increase performance a bunch, but it won't do so just by emulating WebGL code - it will require a proper rewrite of the rendering code so that it works with WebGPU's own rendering pipeline. If I understand correctly, from the little reading I did, it can actually have a very big impact. It's essentially Vulkan on the web, and I can see how most of the GL calls could go away. But there are two questions to be asked.

First, am I motivated to rewrite all of the rendering code for some map viewer barely anyone even knows exists? And for what - it will still perform too poorly to really be used as a base for something more interactive.

Second, there's the question of support. When I started this viewer, it was always using the newest, slickest features on the web as they came online. Support was poor for different features, I had to implement some things manually for different browsers, and user browsers supported or didn't support all sorts of things. It was a mess. Exciting, but a mess. Do I want to go that route now, after 10 years of constantly fixing and upgrading the same code over and over and ensuring it works on as many devices as possible while staying mostly up to date? I don't know. That's why I never updated to WebGL2, after all.

arcman7 commented 3 years ago

Second, there's the question of support. When I started this viewer, it was always using the newest, slickest features on the web as they came online. Support was poor for different features, I had to implement some things manually for different browsers, and user browsers supported or didn't support all sorts of things. It was a mess. Exciting, but a mess. Do I want to go that route now, after 10 years of constantly fixing and upgrading the same code over and over and ensuring it works on as many devices as possible while staying mostly up to date? I don't know. That's why I never updated to WebGL2, after all.

I am almost certain that WebGPU has the full support of Safari, Mozilla, the new Edge (Chromium), and of course Chrome. There shouldn't be much mess as far as that API goes. And yes, WebGPU won't magically make everything better, but considering that it does have better performance with your code, it would be the place where I would test Reforged asset rendering. I get that it would be a lot of work, but as I catch up here, slowly, maybe I can lend a hand - at least with rewriting the shaders? It shouldn't be too hard to pick up that portion anyway.

flowtsohg commented 3 years ago

I don't mean whether it has the support of the browser vendors, but rather whether it is actually supported right now for the average user using their browser - and the answer to that is no. It probably will be in the future, as vendors complete their implementations and remove the flags that hide them, and as people circulate out whatever old OpenGL1/2-capable phones they still have.

If the code could be decently rewritten to somehow support both APIs it would be great, but I don't know enough about WebGPU or have much time or motivation for the time being to do much.

arcman7 commented 3 years ago

Oh, okay, that makes more sense now. I see why you're more interested in exploring the WASM/Rust alternative for the time being. However, once it is released publicly, it could make a huge impact on the performance here... If I were to get as far as porting over a few basic shaders, connecting it into the current setup with some sort of hasWebGPU flag, and showing you what this would look like in a feature branch, would that be something you'd be interested in seeing?

flowtsohg commented 3 years ago

I think the shaders are the least of the concerns when it comes to WebGPU. WebGL shaders can be compiled to it, and it's not hard to produce properly compiled binaries as well. My concern is... all the rest. Everywhere WebGL is used will have to be changed in some way. Does this mean an entirely separate handler? Perhaps branching on every piece of code that calls GL? Perhaps isolating all of the GL state (most of it is somewhat isolated already) and then adding branching in specific methods, but keeping the handlers more or less the same? Maybe something else. I really don't know enough about WebGPU/Vulkan to say how different the code would be. It seems like it would mostly let most of the runtime calls move to setup time instead, which would be nice. I don't know how dynamic stuff like bone textures or particle buffers would work. I don't really have time to work on the Rust thing either :/

arcman7 commented 3 years ago

I really don't know enough about WebGPU/Vulkan to say how different the code would be.

I'm currently in the process of relocating to Texas. I'll have a branch up shortly after, where I'll start trying to see what that looks like.

arcman7 commented 3 years ago

Hey, back on this again lol. Who knew relocating and starting a new job could take up so much time 😄

flowtsohg commented 3 years ago

Welcome back! Not much changed around here though :P

arcman7 commented 3 years ago

Bigger update

  • The clients now get built with webpack. I might split it to two different webpack configs, still not sure about the setup.
  • Added the downgrader client, to make Reforged maps openable in old editors. It doesn't copy triggers, and doesn't support all of the Reforged files (see the issues).
  • Copied Retera's Warsmash handling of axis-specific billboarding for skeletal nodes.
  • Added a tiny bit of Reforged stuff to the Jass2 context. But really, just a tiny bit.

I see some new commits!

Also, I realized that the choppy animation I was experiencing was not an issue of mdx-m3-viewer's runtime speed... It looks like something else is causing the issues. I tried a different model (a brown wolf), and also tried bringing the number of model instances down to < 5, and the choppiness was still there. I need to figure out why/how I'm doing this wrong.

flowtsohg commented 3 years ago

I updated lots of small things, but nothing major changed or probably will change(?)

Are you running on a non-60Hz monitor? If so, make sure to control dt when calling viewer.update() or viewer.updateAndRender() - see the last section of the README. Even on a 60Hz monitor, controlling dt would make the animations less hectic if you have slowdowns for some reason - which you shouldn't when rendering one model 🤔

arcman7 commented 3 years ago

Variable frames per second

ModelViewer.update() and ModelViewer.updateAndRender() have an optional dt argument.

dt controls how much time in milliseconds to advance the animations.

By default, dt is set for 60FPS, or 1000 / 60.

How can you actually match the rate at which ModelViewer.update() gets called?

Say let fps = 1000 / 60; - if I want to run the animations at fps, that would mean waiting for the duration of fps before calling update again. So if I want to advance the animations by fps, how can I be assured that, from the JavaScript event loop, these update functions also get called at a frequency of fps?

flowtsohg commented 3 years ago

To clarify, the idea is not to set the FPS of the viewer, but rather match the FPS of the browser.

The recommended thing in JS is to use requestAnimationFrame, which runs at the monitor's refresh rate.

Since JS controls the actual waiting, we can check the current time against the last frame's time to see what the FPS is, and once you know the FPS, you can control the animation speed.

You could of course pass any dt you want if you want to affect time, and there's also ModelInstance.timeScale to affect the animation speed of specific instances (although this isn't relevant to the issue).
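
A minimal sketch of that pattern (viewer stands in for a ModelViewer instance, whose update functions take the optional dt argument described above):

declare const viewer: { updateAndRender(dt?: number): void };

let lastTime = performance.now();

function step(now: number) {
  const dt = now - lastTime; // real milliseconds since the previous frame
  lastTime = now;
  viewer.updateAndRender(dt); // advance animations by the measured frame time
  requestAnimationFrame(step);
}

requestAnimationFrame(step);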