AnderGray closed this pull request 10 months ago.
All modified and coverable lines are covered by tests :white_check_mark:
Comparison is base (ebb1b7e) 98.68% compared to head (ffef5dc) 98.76%.
Thanks for the PR, looking very nice overall!
The only thing I don't care for is the logging. It's very noisy as it is now.
I would prefer it if we remove it for now and make a larger PR for nice logging in more places than just subset.
Thanks for the review!
Yes, I agree about the logging. However, I am finding it useful when running expensive models. For now, would it work if we replace the `@info` with `@debug`?
The other thing we could consider adding is an example of an ablation test for all the subset methods. Something that plots the result as a function of input dimension and input samples `N`. Something similar to what's done in this example: #138. I have a script ready which does this. Can also add later once we fix that issue.
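Roughly, the kind of ablation meant here could look like the following sketch. The estimator is a toy stand-in; the actual script would call the package's `probability_of_failure` on a real model with each subset method:

```julia
# Toy stand-in for a subset-simulation pf estimate; the real study would call
# probability_of_failure with each subset method on a benchmark model.
estimate_pf(d, N) = count(>(3.0), randn(N, d)) / (N * d)

dims = [2, 10, 50]          # input dimensions to sweep
Ns   = [100, 1_000, 10_000] # sample sizes to sweep

# One pf estimate per (dimension, sample size) combination, ready to plot.
results = [(d, N, estimate_pf(d, N)) for d in dims, N in Ns]
```

Plotting the third entry of each tuple against `dims` and `Ns` would give the convergence picture described above.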
I'd be fine with using `@debug` for now.
Looks good from my side.
What do you think about having an option without using a Markov chain, like we had before?
Thanks, I'm merging for now.
> What do you think about having an option without using a Markov chain, like we had before?
Does that make sense though? Does it still sample from the correct conditional pdf? Maybe we should run some benchmarks.
I think it's worth a test. I was getting sensible answers with the previous method.
Probably biased in some way... I'm not sure how it can be tested. The additional parallelizability makes it interesting, though.
An implementation of adaptive conditional subset simulation (subset infinity) from [1,2].
The idea behind the algorithm is to adaptively tune the standard deviation (`s` in our implementation) of the proposal distribution towards an optimal acceptance rate, which they say is `a_star = 0.44`.

For each level, the algorithm iteratively computes the acceptance rate. This is done by partitioning the seeds (say into 4), running the usual subset infinity 4 times, and computing a scaling factor `λ`, which updates the std: `s_new = min(1, λ*s)`. In this case, `s` is updated 4 times for each level.

Here's my data structure:
The new properties:

- `λ`: initial scaling factor. Recommended to be set to `1`.
- `Na`: the number of times to update the scaling factor in each level. The number of seeds is partitioned this many times. If `Na = 1`, the update will only happen once at each level.

Note, I've implemented the structures as mutable, because `λ` needs to be updated/saved at each level. Perhaps there's a better way, but this way the same `probability_of_failure` function can be used.

Here's the constructor:
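In rough form it could look like the sketch below. The field names other than `λ`, `Na`, and `s`, and the exact checks, are illustrative assumptions rather than the final implementation:

```julia
# Hypothetical sketch of the mutable struct and its checked constructor;
# names and checks are illustrative, not the actual implementation.
mutable struct SubSetInfinityAdaptive
    n::Int       # samples per level
    p0::Float64  # intermediate failure probability per level
    levels::Int  # maximum number of levels
    Na::Int      # scaling-factor updates per level
    λ::Float64   # scaling factor, updated in place at each level
    s::Float64   # proposal standard deviation

    function SubSetInfinityAdaptive(n, p0, levels, Na, λ=1.0, s=1.0)
        # First check: p0 must be a valid intermediate probability.
        0 < p0 < 1 || error("p0 must lie in (0, 1)")
        # Second check: the n * p0 seeds must partition evenly into Na groups.
        mod(n * p0, Na) == 0 ||
            error("number of seeds (n * p0) must be divisible by Na")
        return new(n, p0, levels, Na, λ, s)
    end
end
```

With `Na = 1` the scaling factor is updated once per level; larger `Na` splits the seeds into that many batches and applies `s_new = min(1, λ*s)` after each batch.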
The second check is important to ensure the seeds can be partitioned into `Na` parts.

Further future improvements
LOGGING
It would be good to use Logging.jl to report the progress of the subset simulation algorithms. I've been running quite expensive simulations, and it's useful to print progress. I've added some logging; however, I think it can be improved... currently too much information is being printed.
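For example, per-level progress could be demoted to `@debug` (as discussed above), so it stays silent by default but can be switched on for expensive runs. A minimal sketch, where the loop body is a placeholder rather than the package's actual level loop:

```julia
using Logging

# Sketch of quieter progress reporting: @debug messages are hidden by the
# default logger and only shown when a Debug-level logger is installed.
function run_levels(levels)
    for level in 1:levels
        threshold = -float(level)  # placeholder for the intermediate threshold
        @debug "Subset simulation progress" level threshold
    end
    return levels
end

run_levels(3)  # silent with the default logger

# Opt in to the messages when running an expensive model:
with_logger(ConsoleLogger(stderr, Logging.Debug)) do
    run_levels(3)
end
```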
Refs
[1] Papaioannou, I., Betz, W., Zwirglmaier, K., & Straub, D. (2015). MCMC algorithms for subset simulation. Probabilistic Engineering Mechanics, 41, 89-103.
[2] Chan, J., Papaioannou, I., & Straub, D. (2022). An adaptive subset simulation algorithm for system reliability analysis with discontinuous limit states. Reliability Engineering & System Safety, 225, 108607.