DoubleStyx closed this issue 2 weeks ago
I'm pretty sure applying deduplicated field changes is thread-safe, unless you're doing something like deleting or recreating a component. Those changes might have to be handled serially.
We can place object/component changes into a separate set, apply that first in parallel so the necessary memory regions get allocated, and then apply the field updates in parallel. I think this would work?
yeah, that should work
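The two-phase idea above could be sketched roughly like this. Everything here (`IChange`, `IsStructural`, `Apply`) is a hypothetical stand-in, not the engine's actual API; it just shows structural changes being applied before the parallel field updates.

```csharp
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;

// Hypothetical change abstraction; not the engine's real API.
public interface IChange
{
    bool IsStructural { get; }   // component/object creation or deletion
    void Apply();
}

public static class ChangeApplier
{
    public static void ApplyBatch(IEnumerable<IChange> changes)
    {
        var groups = changes.ToLookup(c => c.IsStructural);

        // Phase 1: structural changes first, so the components/memory
        // that the field updates target are guaranteed to exist.
        Parallel.ForEach(groups[true], c => c.Apply());

        // Phase 2: deduplicated field updates, which can run in parallel
        // assuming each one touches a distinct field.
        Parallel.ForEach(groups[false], c => c.Apply());
    }
}
```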
Not sure exactly how to do this. The `BatchQueue` is a `Queue<Batch>`, where `Batch` has a `Queue<RenderTask>` `renderTasks` and a bool `isComplete`. The `RenderTask` itself contains a `RenderSettings` object and a `TaskCompletionSource`:
```csharp
public class RenderSettings
{
    public float3 position;
    public floatQ rotation;
    public int2 size = int2.One * 1024;
    public TextureFormat textureFormat = TextureFormat.ARGB32;
    public CameraProjection projection;
    public float fov = 60f;
    public float ortographicSize = 8f;
    public CameraClearMode clear;
    public colorX clearColor = colorX.Clear;
    public float near = 0.01f;
    public float far = 2000f;
    public List<Slot> renderObjects;
    public List<Slot> excludeObjects;
    public bool renderPrivateUI;
    public bool postProcesing = true;
    public bool screenspaceReflections;
    public Func<Bitmap2D, Bitmap2D> customPostProcess;

    public RenderSettings(float3 position, floatQ rotation)
    {
        this.position = position;
        this.rotation = rotation;
    }

    public static RenderSettings Equirectangular(
        float3 position,
        floatQ rotation,
        int2 resolution)
    {
        return new RenderSettings(position, rotation)
        {
            size = resolution,
            fov = 360f
        };
    }
}
```
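Based on the description above, the surrounding queue types would look roughly like this. Only the names mentioned in the thread (`BatchQueue`, `Batch`, `renderTasks`, `isComplete`, `RenderTask`) come from the source; the stub types and the `TaskCompletionSource` result type are assumptions so the sketch compiles on its own.

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;

// Stand-ins so this sketch is self-contained; the real engine types differ.
public class RenderSettingsStub { }
public class Bitmap2DStub { }

public class RenderTask
{
    public RenderSettingsStub settings;
    // Result type is a guess; the thread only says "TaskCompletionSource".
    public TaskCompletionSource<Bitmap2DStub> completionSource =
        new TaskCompletionSource<Bitmap2DStub>();
}

public class Batch
{
    public Queue<RenderTask> renderTasks = new Queue<RenderTask>();
    public bool isComplete;
}

public static class Renderer
{
    // "The BatchQueue is a Queue<Batch>", per the thread.
    public static readonly Queue<Batch> BatchQueue = new Queue<Batch>();
}
```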
Maybe we could deduplicate based on the render objects being used? But that might add overhead on every enqueue attempt.
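One minimal way to sketch that dedup-on-enqueue idea, assuming each render object can be reduced to a stable id (here plain `int`s stand in for `Slot` identities, and the string key derivation is just an illustration of the per-enqueue cost being raised above):

```csharp
using System.Collections.Generic;
using System.Linq;

// Hypothetical sketch: reject an enqueue whose render-object set
// is already pending. Not the engine's actual queue.
public class PendingQueue
{
    private readonly Queue<int[]> queue = new Queue<int[]>();
    private readonly HashSet<string> seen = new HashSet<string>();

    // Returns false if an identical render-object set is already queued.
    public bool TryEnqueue(IEnumerable<int> renderObjectIds)
    {
        // Sort so {1,2} and {2,1} produce the same key; this sort + join
        // is the O(n log n) overhead paid on every enqueue attempt.
        var ids = renderObjectIds.OrderBy(i => i).ToArray();
        string key = string.Join(",", ids);
        if (!seen.Add(key))
            return false;
        queue.Enqueue(ids);
        return true;
    }
}
```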
Closed because of #23's explanation.
I'll actually reopen this one as a research task, to see whether there's any way to do this safely.
Theoretically this might be doable, but it's safer to assume the order does matter. Testing partly confirmed this as well.
Currently the task queue is single-threaded, which is generally fine in sync mode (unless you get very unlucky with timing). In async mode the changes are necessarily batched, so it's important to process them as quickly as possible. However, simply running all the changes in parallel isn't deterministic, because some values may have been written to multiple times within a batch. To address this, we might want to switch the batch's internal queue to a set, since the ordering doesn't matter but the uniqueness does.
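The queue-to-set switch could be sketched like this, assuming each update can be keyed by the field it targets (the `int` field id and `object` value are simplified stand-ins for the engine's types). Keeping only the last write per field makes the batch's contents order-independent, so the surviving updates can then be applied in any order, or in parallel:

```csharp
using System.Collections.Generic;

// Hypothetical sketch of a batch that holds at most one pending
// update per target field, instead of a Queue of every write.
public class FieldUpdateBatch
{
    // Keyed by field id; a later write to the same field replaces the
    // earlier one, so each field ends up applied exactly once.
    private readonly Dictionary<int, object> pending =
        new Dictionary<int, object>();

    public void Enqueue(int fieldId, object value) => pending[fieldId] = value;

    // The deduplicated updates, ready to be applied in any order.
    public IReadOnlyDictionary<int, object> Drain() => pending;
}
```

Note this relies on enqueues within a batch still happening serially (as they do on the current single-threaded queue), so "last write wins" is well-defined.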