Leafwing-Studios / Emergence

An organic factory builder about adapting to a changing world.
https://leafwing-studios.github.io/Emergence/
Apache License 2.0

Swap to continuous density field strategy for signal management #511

Open alice-i-cecile opened 1 year ago

alice-i-cecile commented 1 year ago

After talking over the design and benchmarking results with @plof27 today, we've decided on a strategy to pursue for signal propagation (#1).

Key findings:

  1. Signal diffusion takes wildly more time than any other operation. See the trace in #506.
  2. Signals are accessed very rarely compared to how often they are diffused. Consider a worst case for lookups: suppose we have 100 different signal types (very low), a unit on each tile (very high), and every unit must act each frame by following a signal gradient (very high). That gives us 7 lookups per tile. For diffusion, however, we have 7 * 100 lookups and 100 mutations for each tile. This is backed up by the trace above using the MVP implementation.
  3. Signals are generally quite sparse. In factory builders, you want a large variety of items and structures, but most of the time that information is irrelevant to any unit that isn't fairly nearby: such units will always have something better to do. As a result, most signals should be treated as 0 in most places.
    1. The actual signal propagation logic can be quite simple and still works well!

For some more realistic numbers from the initial MVP:

For a discretized strategy, that gives us:

We can reduce how frequently our signal-diffusion pass runs, but it probably can't be slower than once every half second or there will be perceptibly wrong results:

Obviously signal diffusion is dominating, but decay matters too!

Ideally our map is:

Every second, we must process 10^6 * 25 = 2.5 * 10^7 diffusion steps, which gives us 4e-8 seconds for each, aka 40 nanoseconds. If we want to use a GPU with matrix acceleration, we can't easily take advantage of sparsity. Thankfully though, unlike traditional pathfinding, all operations are linear with respect to map size, and scale very slowly with unit count (because there are so many fewer operations).

So, instead of a discretized storage approach, we propose using a density field strategy. The fundamentals are fairly straightforward and come from applied math.

  1. Track relevant objects (signal emitters, barriers, special signal modifiers).
  2. For each object type, determine a differential equation of how they affect the field strength near them.
  3. Combine these equations to determine (for each signal type) an approximation that we can use to get the field strength at a position and time.
  4. Use this approximation to lazily look up the values when we need them.

We only actually care about the value of these signals where these fields are evaluated! And because they're evaluated so rarely (see above), we have a massive budget to work with.

Let's pessimistically round up to 0.5 lookups per tile per second. At 10^6 tiles, that means we have 2000 nanoseconds to do each computation! As a bonus, in many cases we only want very approximate results (one significant figure is basically fine for following a gradient upstream or choosing a goal at random), and so we can stop the computation very early.
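To make the lazy lookup concrete, here is a minimal Rust sketch of what evaluation could look like. Everything in it is an assumption for illustration (`SignalType`, `SignalEmitter`, the cutoff radius, and the kernel shape, which anticipates the heat-equation discussion further down the thread), not the real API:

```rust
use std::collections::HashMap;

/// Illustrative signal categories; the real game has many more.
#[derive(Clone, Copy, PartialEq, Eq, Hash)]
enum SignalType {
    Push,
    Pull,
}

/// Anything that emits a signal: where it is, how strongly it emits,
/// and when it started doing so.
struct SignalEmitter {
    position: (f32, f32),
    strength: f32,
    start_time: f32,
    diffusivity: f32,
    decay_rate: f32,
}

/// All emitters, grouped by signal type. Sparse by construction:
/// most signal types have only a handful of emitters.
struct SignalFields {
    emitters: HashMap<SignalType, Vec<SignalEmitter>>,
}

impl SignalFields {
    /// Lazily evaluate one signal's field strength at a point in space and time.
    /// Nothing is stored per tile; we only pay for positions that are actually queried.
    fn strength(&self, signal: SignalType, position: (f32, f32), now: f32) -> f32 {
        // Emitters beyond this radius contribute approximately nothing,
        // so we skip them entirely (exploiting sparsity).
        const CUTOFF_DISTANCE_SQUARED: f32 = 100.0 * 100.0;

        let Some(emitters) = self.emitters.get(&signal) else {
            // No emitters of this type anywhere: the field is exactly zero.
            return 0.0;
        };

        emitters
            .iter()
            .filter_map(|emitter| {
                let dx = position.0 - emitter.position.0;
                let dy = position.1 - emitter.position.1;
                let distance_squared = dx * dx + dy * dy;
                let elapsed = now - emitter.start_time;
                if distance_squared > CUTOFF_DISTANCE_SQUARED || elapsed <= 0.0 {
                    return None;
                }
                // One possible kernel: the 2D heat kernel with exponential decay.
                let kernel = (4.0 * std::f32::consts::PI * emitter.diffusivity * elapsed)
                    .recip()
                    * (-distance_squared / (4.0 * emitter.diffusivity * elapsed)).exp();
                Some(emitter.strength * kernel * (-emitter.decay_rate * elapsed).exp())
            })
            .sum()
    }
}
```

The key property is that the cost of a query scales with the number of nearby emitters of that signal type, not with the size of the map.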

Of course, the big challenge is actually writing the math to do this, especially as we add more game mechanics. But diffusion equations are a very well-studied field, so I'm feeling quite optimistic.

alice-i-cecile commented 1 year ago

Truly huge bases in Factorio look to be about 10k-100k tiles in width: getting even 1k width should still make the map feel sprawling given the increased diversity.

RobWalt commented 1 year ago

If we want to go the diffusion equation route, then it looks like a classical application of the heat equation plus some general decay. I can look into how we could implement it efficiently for this use case in the coming days when I'm back at home.
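For reference, a minimal sketch of that equation, with the decay modelled as a linear sink term; here $u$ is the signal density, $k$ the diffusivity and $\lambda$ the decay rate (per-signal-type values for both are an assumption):

$$\frac{\partial u}{\partial t} = k \, \nabla^2 u - \lambda u$$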

Just a general question: by signals we mean "ant signals", right? I was a bit confused at first because I thought the issue was related to event processing, but I think that's not it.

Also a general remark: numerical simulations of differential equations are mostly implemented with arrays/vectors (afaik). This implies that we might need to refactor the signal management from the hashmap approach to a vectorized approach. I could imagine that this refactoring might result in some performance gains as well, so it might be a good decision to split the issue into these two sub-tasks and look at the benchmarks in between.

alice-i-cecile commented 1 year ago

Yep, heat equation plus decay is the plan :) The biggest stumbling blocks for me right now are:

  1. Proof of concept of architecture based on the density field approach.
  2. Modelling obstructions: signals must pass around structures so they still work for pathfinding.

References or prototypes on either would be super valuable; I'd love to collaborate on this work!

By signals we mean "ant signals", right?

These are the general purpose pathfinding + goal selection tool that we're using to drive AI here :) Might need a better name? The basic idea though is:

  1. Everything gives off signals based on what it is, which diffuse to nearby tiles.
  2. Allied structures give off push/pull signals requesting the items they want.
  3. Units pick a goal by sampling their local signals.
  4. Units walk up the signal gradient for pathfinding.
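Building on the hypothetical `SignalFields` sketch above, steps 3 and 4 might look roughly like the following; `TilePos` and its helpers are made up for illustration and are not the project's real tile type:

```rust
/// A made-up axial hex coordinate; the real project has its own tile type.
#[derive(Clone, Copy)]
struct TilePos {
    q: i32,
    r: i32,
}

impl TilePos {
    /// World-space centre of the tile (pointy-top hexes with unit spacing assumed).
    fn center(self) -> (f32, f32) {
        let x = self.q as f32 + self.r as f32 / 2.0;
        let y = self.r as f32 * 3.0_f32.sqrt() / 2.0;
        (x, y)
    }

    /// The six hex neighbors in axial coordinates.
    fn neighbors(self) -> [TilePos; 6] {
        const OFFSETS: [(i32, i32); 6] = [(1, 0), (1, -1), (0, -1), (-1, 0), (-1, 1), (0, 1)];
        OFFSETS.map(|(dq, dr)| TilePos { q: self.q + dq, r: self.r + dr })
    }
}

/// Step 3: pick the goal whose signal is strongest on the current tile.
/// Step 4: step towards the neighbor where that signal is even stronger.
fn choose_goal_and_step(
    unit_tile: TilePos,
    signals: &SignalFields,
    candidate_goals: &[SignalType],
    now: f32,
) -> Option<(SignalType, TilePos)> {
    let here = unit_tile.center();
    let goal = candidate_goals.iter().copied().max_by(|a, b| {
        signals
            .strength(*a, here, now)
            .total_cmp(&signals.strength(*b, here, now))
    })?;

    let next_tile = unit_tile.neighbors().into_iter().max_by(|a, b| {
        signals
            .strength(goal, a.center(), now)
            .total_cmp(&signals.strength(goal, b.center(), now))
    })?;

    Some((goal, next_tile))
}
```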

Also a general remark: Numerical simulations of differential equations are mostly implemented with arrays/vectors (afaik). This implies that we might need to refactor the signal management from the hashmap approach to a vectorized approach.

Definitely standard! Our needs are quite different here though: rather than one field that we care about modelling accurately everywhere, we have many fields that are zero in most places (because most items / structures will be clustered into little factory hubs). As a result, we probably won't actually benefit much from the increased cache locality of using something like an array or space-filling curve.
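To make the trade-off concrete, a hedged sketch of the two storage layouts being compared (type names are illustrative only):

```rust
use std::collections::HashMap;

/// Dense layout: one value per tile per signal type.
/// Great cache locality, but with ~100 signal types and 10^6 tiles this is
/// on the order of 10^8 floats, almost all of which are zero.
struct DenseSignalMap {
    /// Row-major grid, indexed as `values[y * width + x]`.
    values: Vec<f32>,
    width: usize,
}

/// Sparse layout: only tiles with a non-negligible signal are stored at all.
/// Each lookup is a hash away, but memory and iteration scale with the number
/// of interesting tiles rather than with the whole map.
struct SparseSignalMap {
    values: HashMap<(i32, i32), f32>,
}

impl SparseSignalMap {
    fn get(&self, tile: (i32, i32)) -> f32 {
        // Absent entries are treated as exactly zero.
        *self.values.get(&tile).unwrap_or(&0.0)
    }
}
```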

Austreelis commented 1 year ago

I'm interested in this problem, I'll start looking at it in the next few days if that's alright :) I'm quite unfamiliar with Emergence's simulation code, but it doesn't look too complex.

alice-i-cecile commented 1 year ago

All of the relevant code for this problem can be found in https://github.com/Leafwing-Studios/Emergence/blob/main/emergence_lib/src/signals.rs :)

RobWalt commented 1 year ago

Here are some theoretical thoughts on a simple first approach which doesn't need to run diffusion calculations every frame. The approach is based on the heat equation as mentioned above and treats all signals as diffusive. (In the future we might also look into other types of equations, such as the wave equation for bird-sound transmission or similar.) The approach also doesn't consider barriers or signal modifiers yet, but it should be extensible in that regard with some additional work.

A new approach

Let's start with some easy things: we are modelling diffusion + decay, so the approach will be parametrized by a diffusion rate and a decay rate for each signal type.

The rest of the approach strictly follows the steps mentioned in the issue's description:

  1. Track relevant objects (signal emitters, barriers, special signal modifiers).

As mentioned in the starting paragraph, we simplify the model for now (no barriers or modifiers). The only properties of signals we need to track are:

  2. For each object type, determine a differential equation of how they affect the field strength near them.

Here we can utilize what's called a "fundamental solution" of the heat equation. It is basically a closed-form solution (i.e. a function) which describes the values of the heat equation for arbitrary points in space and time. Here is what it looks like:

$$\Phi(x, y, t) = \frac{1}{4 \pi k t} \exp\!\left(-\frac{x^2 + y^2}{4 k t}\right)$$

(the standard fundamental solution of the heat equation in two dimensions, with diffusivity $k$)

  3. Combine these equations to determine (for each signal type) an approximation that we can use to get the field strength at a position and time.

For every signal type, we can create an initial condition (the distribution of the signal at t == 0) for the heat equation over the whole domain; this can be an arbitrary function. We would craft it pointwise from the signal strengths of all tiles carrying that signal type, e.g.

g(x,y) = signal_strength[(x,y)]

This initial condition can be convolved with the fundamental solution of the heat equation to get a general solution of the form

$$u(x, y, t) = (\Phi(\cdot, \cdot, t) * g)(x, y) = \int_{\mathbb{R}^2} \Phi(x - x', y - y', t)\, g(x', y')\, \mathrm{d}x'\, \mathrm{d}y'$$

  4. Use this approximation to lazily look up the values when we need them.

If a signal is accessed, we basically need to evaluate this general solution at the queried position and the elapsed time.

Practical notes

Batching over time

It's probably best to run multiple heat equation calculations for one cell, batched with respect to time, to avoid weirdness. The batched calculations can then be averaged to produce a compound result. The following scenario tries to explain why this might be important: think of a signal that was emitted far away from the tile a unit is currently reading.

If we use the approach from above with t = 1 second, the signal that is far away would also only have 1 second to diffuse. This would probably mean it doesn't reach the target tile in any meaningful way. So it might be better to batch calculations for every 2 seconds or so. Additionally, it might make sense not to consider any signal for the initial condition that is older than a threshold or further away than a given distance, to prevent too many batches.

Evaluating the general solution

The general solution involves the convolution of two functions. This is a pretty complex mathematical operation which we can, however, calculate pretty efficiently by FFT. For those who haven't seen it yet, you can find a good 3B1B video on it here.

Also: Most of the fundamental solution can be calculated at compile time (there are many constants in the function) to reduce the computational footprint of the function further.
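For a single lazily evaluated query (where a full-grid FFT would be overkill), a plausible sketch is a direct sum of the kernel over the non-zero tiles of the initial condition; `DIFFUSIVITY` and `DECAY_RATE` are placeholder constants, not values from the game:

```rust
use std::collections::HashMap;

const DIFFUSIVITY: f32 = 1.0; // placeholder k
const DECAY_RATE: f32 = 0.1; // placeholder lambda

/// Evaluate u(x, y, t) = e^(-lambda * t) * (Phi(., ., t) * g)(x, y) at one point,
/// where g is the sparse initial condition and Phi the 2D heat kernel from above.
fn evaluate_general_solution(
    query: (f32, f32),
    t: f32,
    initial_condition: &HashMap<(i32, i32), f32>,
) -> f32 {
    assert!(t > 0.0, "the kernel is only defined for positive elapsed time");
    let normalization = 1.0 / (4.0 * std::f32::consts::PI * DIFFUSIVITY * t);

    let diffused: f32 = initial_condition
        .iter()
        .map(|(&(tile_x, tile_y), &strength)| {
            let dx = query.0 - tile_x as f32;
            let dy = query.1 - tile_y as f32;
            // Discrete convolution: each non-zero tile contributes one Gaussian bump.
            strength * normalization * (-(dx * dx + dy * dy) / (4.0 * DIFFUSIVITY * t)).exp()
        })
        .sum();

    // The linear decay term factors out of the pure-diffusion solution
    // as a plain exponential multiplier.
    diffused * (-DECAY_RATE * t).exp()
}
```

If we ever do need the field on every tile at once (e.g. for a debugging overlay), that is the point where an FFT-based convolution over the whole grid becomes worthwhile.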

alice-i-cecile commented 1 year ago

Excellent starting analysis, thank you!