Caltech-IPAC / kete

Kete Solar System Survey tools
https://Caltech-IPAC.github.io/kete
BSD 3-Clause "New" or "Revised" License

PCK switch to ShardedLock #138

Closed dahlend closed 1 month ago

dahlend commented 1 month ago

SPK was switched to a ShardedLock a while ago, but PCK was missed. This is a significant performance improvement for multi-threaded Earth orientation queries.

fmasci commented 1 month ago

Can you tell me more about this?

Frank


dahlend commented 1 month ago

Spice kernels are stored in memory shared between threads. Having multiple threads access the same memory can lead to race conditions, where one thread may modify memory while another thread is reading it. The RwLock (read/write lock) approach means you have a Mutex (basically a thread-safe flag) which indicates that the chunk of memory you are using is either in read or write mode. If it is in read mode, all threads can safely read simultaneously. If it is in write mode, only one thread may use it at a time.
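
As a minimal sketch of that pattern using Rust's standard library (this is illustrative only, not kete's actual code; the names and data are placeholders):

```rust
use std::sync::{Arc, RwLock};
use std::thread;

fn main() {
    // Placeholder stand-in for a loaded spice kernel segment.
    let kernel: Arc<RwLock<Vec<f64>>> = Arc::new(RwLock::new(vec![0.0; 8]));

    // Writer: takes the exclusive lock, blocking all readers while it is held.
    {
        let mut guard = kernel.write().unwrap();
        guard[0] = 42.0;
    }

    // Readers: read guards can be held by many threads at the same time.
    let handles: Vec<_> = (0..4)
        .map(|_| {
            let kernel = Arc::clone(&kernel);
            thread::spawn(move || {
                let guard = kernel.read().unwrap();
                guard.iter().sum::<f64>()
            })
        })
        .collect();

    for h in handles {
        println!("sum = {}", h.join().unwrap());
    }
}
```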

The standard read/write lock implementation in Rust assumes roughly equal read and write usage. For our needs, however, we typically write once (or very infrequently) and only read from that point forward. This is very one-sided usage, so the assumptions the RwLock operates under are not ideal, leading to poorer performance. I tested several alternative implementations about 6 months ago and settled on the ShardedLock, which does the same thing, but with an implementation that is friendlier to the read-heavy conditions we experience. It was about an order of magnitude speedup when many threads are in contention for the spice memory.
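
A sketch of the swap (assuming a crossbeam dependency, whose ShardedLock exposes the same read()/write() interface as std::sync::RwLock, so it is close to a drop-in replacement; the struct and fields below are illustrative, not kete's types):

```rust
use std::sync::Arc;
use crossbeam::sync::ShardedLock;

// Illustrative stand-in for parsed PCK data; not kete's actual type.
struct PckSegments {
    segments: Vec<f64>,
}

fn main() {
    let cache = Arc::new(ShardedLock::new(PckSegments { segments: Vec::new() }));

    // Infrequent write: load the kernel data once.
    cache.write().unwrap().segments = vec![1.0, 2.0, 3.0];

    // Hot path: many concurrent reads, where ShardedLock's per-shard locks
    // reduce contention compared to a single RwLock.
    let total: f64 = cache.read().unwrap().segments.iter().sum();
    println!("{total}");
}
```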

dahlend commented 1 month ago

Unfortunately I only put the change in the SPK reader, which is where we need it the most, and forgot to change the PCK reader. This PR just updates that.

fmasci commented 1 month ago

Got it. Thanks.

Frank
