TeamAtomECS / AtomECS

Cold atom simulation code
GNU General Public License v3.0

Panics when more than 16 lasers #55

Closed · damienBloch closed this 3 years ago

damienBloch commented 3 years ago

I was trying to increase the number of cooling lasers to replicate sidebands. Every time I add more than 16 lasers to the world, a thread panics.

Any thoughts on what could be causing the problem?

ElliotB256 commented 3 years ago

Currently the laser samplers store per-laser quantities in a fixed-size array of length 16; see laser::COOLING_BEAM_LIMIT. We originally used a variable-length (and heap-allocated) Vec<> for the laser samplers, but moving to fixed-size arrays gave a performance increase. You can see more about the performance tweaks here.

Short fix: you could raise the cap (just modify COOLING_BEAM_LIMIT). Long term, we can revisit whether it's worth using Vec<> again (or some other, faster container).
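
To make that concrete, here is a minimal sketch of the layout described above; the struct and field names are illustrative assumptions, not the crate's exact definitions. Raising the cap is a one-line constant change:

```rust
// Cap on the number of cooling beams; per-atom samplers are sized by this.
pub const COOLING_BEAM_LIMIT: usize = 32; // default is 16

// Fixed-size, stack-allocated per-laser storage (current approach)...
pub struct LaserSamplers {
    pub contents: [f64; COOLING_BEAM_LIMIT],
}

// ...versus the original heap-allocated variant:
// pub struct LaserSamplers { pub contents: Vec<f64> }
```

Adding a 17th beam overruns the fixed array, which is what triggers the panic.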

ElliotB256 commented 3 years ago

Note that the fixed arrays were faster, but we could debate whether the speedup is worth giving up the flexibility. A few posts down, it says:

17.7s compared to 22.5s using vec!

damienBloch commented 3 years ago

Ok, thanks! I'll have a look.

It's just that the limit of 16 seemed a bit low. A simple MOT already has 6 beams, so it's fairly easy to go above 16.

ElliotB256 commented 3 years ago

Yeah agreed, especially when you also have e.g. sidebands in some setups.

ElliotB256 commented 3 years ago

Could you profile the benchmark simulation as a function of the array size?

damienBloch commented 3 years ago

I simply measured the run time of the benchmark example on an i7-1165G7 @ 2.80GHz. Not sure how to do detailed profiling, though.
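
For reference, a wall-clock measurement of this kind needs nothing more than std::time::Instant; this is a minimal stand-in harness, not the repo's benchmark code:

```rust
use std::time::Instant;

// Stand-in workload; in practice this is the benchmark example's main loop.
fn run_benchmark() -> f64 {
    (0..10_000_000).map(|i| (i as f64).sqrt()).sum()
}

fn main() {
    let start = Instant::now();
    let result = run_benchmark();
    println!("result {result:.1}, elapsed {:?}", start.elapsed());
}
```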

[benchmark: plot of run time versus COOLING_BEAM_LIMIT]

I went a bit high for the upper limit because I want to have a MOT with many sidebands, but I imagine it's an extreme case.

I had to remove #[derive(Serialize, Deserialize)] for ExpectedPhotonsScatteredVector and ActualPhotonsScatteredVector for this test, because these traits aren't automatically implemented for arrays longer than 32 elements.
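
(For what it's worth, the serde-big-array crate works around that 32-element limit. A hedged sketch, with an assumed field name and a placeholder element type standing in for ActualPhotonsScattered:)

```rust
use serde::{Deserialize, Serialize};
use serde_big_array::BigArray;

const COOLING_BEAM_LIMIT: usize = 64;

// Placeholder per-beam sample, standing in for ActualPhotonsScattered.
#[derive(Serialize, Deserialize, Clone, Copy, Default)]
struct Scattered(f64);

#[derive(Serialize, Deserialize)]
struct ActualPhotonsScatteredVector {
    // serde derives array impls only up to length 32; BigArray bridges the gap.
    #[serde(with = "BigArray")]
    contents: [Scattered; COOLING_BEAM_LIMIT],
}
```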

ElliotB256 commented 3 years ago

Thanks Damien, there's obviously an overhead to shuffling larger arrays around in memory (as expected). It looks like 32 would be a safe short-term extension (which allows the Serialize/Deserialize traits to be preserved), and we can return to whether a Vec approach would be more suitable in other cases. I'd be interested to see how the overhead scales for Vec versus a larger COOLING_BEAM_LIMIT.

damienBloch commented 3 years ago

Also, maybe a more explicit error message could be helpful? Otherwise it's unclear why the number of entities with CoolingLight components is limited.
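
Something along these lines would make the failure self-explanatory — a sketch of an assumed guard, not the crate's actual code:

```rust
const COOLING_BEAM_LIMIT: usize = 16;

// Hypothetical check run when a beam index is assigned to a CoolingLight entity.
fn assign_beam_index(next_index: usize) -> usize {
    assert!(
        next_index < COOLING_BEAM_LIMIT,
        "Too many CoolingLight entities: samplers hold at most {} beams \
         (see laser::COOLING_BEAM_LIMIT). Raise the constant and recompile.",
        COOLING_BEAM_LIMIT
    );
    next_index
}
```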

I also just noticed that the lasers are cached with a LASER_CACHE_SIZE in some files, but I didn't change this value in the previous runs.

ElliotB256 commented 3 years ago

The LASER_CACHE_SIZE plays a different role: it defines a fixed-size slice which we iterate over. It was more important in the original Vec implementation, because it allowed us to process chunks of the Vec in batches of a size known at compile time (and so enable SIMD floating-point ops). Since moving to fixed-size arrays, it's possible any performance gain is no longer worth the extra maintenance/complexity. However, let's bear it in mind when we return to profiling whether a fixed-size/Vec approach is worth it.
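
The pattern is roughly the following — an illustrative sketch of compile-time-sized batching, not the crate's actual loop:

```rust
const LASER_CACHE_SIZE: usize = 16;

// Sum per-beam intensities in blocks of a length known at compile time,
// so the compiler can unroll and vectorise the inner loop.
fn total_intensity(intensities: &[f64]) -> f64 {
    let mut sum = 0.0;
    let mut chunks = intensities.chunks_exact(LASER_CACHE_SIZE);
    for chunk in &mut chunks {
        // chunk.len() is exactly LASER_CACHE_SIZE here.
        for i in 0..LASER_CACHE_SIZE {
            sum += chunk[i];
        }
    }
    // Tail that doesn't fill a complete block.
    for &x in chunks.remainder() {
        sum += x;
    }
    sum
}
```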

minghuaw commented 2 years ago

Have you considered using const generics to hand control over the maximum number of lasers to the user while still keeping a fixed-length array?
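
Roughly like this — a sketch of the const-generic idea with assumed type names, not AtomECS's actual API:

```rust
// The beam cap becomes a compile-time parameter chosen by the user.
pub struct LaserSamplers<const N: usize> {
    pub contents: [f64; N],
}

impl<const N: usize> LaserSamplers<N> {
    pub fn new() -> Self {
        Self { contents: [0.0; N] }
    }
}

fn main() {
    // A standard 6-beam MOT fits in a small sampler...
    let _mot = LaserSamplers::<8>::new();
    // ...while a sideband-heavy setup instantiates a wider one.
    let _wide = LaserSamplers::<64>::new();
}
```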

ElliotB256 commented 2 years ago

That's a great idea. There will be a rewrite at some point in the future to support other cooling transitions/models of scattering rates, and we should make that change then.