Justinezgh / muse

Marginal Unbiased Score Expansion experiments

Check how MUSE behaves when we increase the number of latent variables #2

Justinezgh opened 1 year ago

Justinezgh commented 1 year ago

For these 3 experiments I use the following MUSE algorithm configuration:

import jax
import jax.numpy as jnp

theta_start = jnp.array([0.3, 0.8])

# `prob` and `result` come from the problem setup (not shown in this issue)
prob.solve(
    result=result,
    α=0.2,
    θ_start=theta_start,
    θ_rtol=0,
    z_tol=1e-2,
    progress=True,
    maxsteps=100,
    nsims=100,
    rng=jax.random.PRNGKey(1)
)
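(For reference, these options control an iterative score-matching update of roughly the form below; a minimal sketch of the MUSE outer loop, not the actual muse_inference implementation. `data_score`, `mean_sim_score`, and `J_inv` are hypothetical placeholders for quantities the library computes internally.)

import jax.numpy as jnp

def muse_outer_loop(theta0, data_score, mean_sim_score, J_inv,
                    alpha=0.2, maxsteps=100, theta_rtol=0.0):
    # data_score(θ): θ-gradient of the joint likelihood at the MAP latents ẑ(x, θ)
    # mean_sim_score(θ): same gradient averaged over nsims simulated datasets
    # J_inv(θ): approximate inverse Jacobian used to precondition the step
    theta = theta0
    for _ in range(maxsteps):
        s = data_score(theta) - mean_sim_score(theta)   # MUSE score
        step = alpha * J_inv(theta) @ s                 # α damps each update
        theta = theta + step
        # with θ_rtol=0 this early exit never fires, so all maxsteps run
        if theta_rtol > 0 and jnp.all(jnp.abs(step) <= theta_rtol * jnp.abs(theta)):
            break
    return theta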

simulator configuration:

from functools import partial

nb_latent_variables = 100

model = partial(
    lensingLogNormal,
    N=nb_latent_variables,
    map_size=5,
    gal_per_arcmin2=30,
    sigma_e=0.2,
    model_type='lognormal',
    non_gaussianity=0.03
)
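(Side note: `N` sets the map resolution, so the latent space scales as N²: 10,000 latent variables for N=100, up to 250,000 for N=500. A toy illustration of that scaling; the transform below is a made-up stand-in, not the actual lensingLogNormal field.)

import jax
import jax.numpy as jnp

def toy_lognormal_map(key, N, sigma=0.2, shift=0.03):
    # N x N standard-normal latents: the z that MUSE marginalizes over
    z = jax.random.normal(key, (N, N))
    # shifted-lognormal transform (toy stand-in for the real correlated field)
    return shift * (jnp.exp(sigma * z - 0.5 * sigma**2) - 1.0)

kappa = toy_lognormal_map(jax.random.PRNGKey(0), N=100)
print(kappa.size)  # 10000 latent variables for N=100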

[image]

Convergence: [image]

Loss of the last 90 iterations: [image]


simulator configuration:

nb_latent_variables = 200

model = partial(
    lensingLogNormal,
    N=nb_latent_variables,
    map_size=5,
    gal_per_arcmin2=30,
    sigma_e=0.2,
    model_type='lognormal',
    non_gaussianity=0.03
)

[image]

Convergence: [image]

Loss of the last 90 iterations: [image]


simulator configuration:

nb_latent_variables = 250

model = partial(
    lensingLogNormal,
    N=nb_latent_variables,
    map_size=5,
    gal_per_arcmin2=30,
    sigma_e=0.2,
    model_type='lognormal',
    non_gaussianity=0.03
)

[image]

Convergence: [image]

Loss of the last 90 iterations: [image]

Justinezgh commented 1 year ago

To do

  • [x] change the starting point theta (it was set to the fiducial params) to check if it can still converge

    Update: I changed it to [0.33, 0.83] [image]; the results are below

  • [ ] reduce the step size to make the score closer to zero

Justinezgh commented 1 year ago

For these 4 experiments I use the following MUSE algorithm configuration:

theta_start = jnp.array([0.33, 0.83])

prob.solve(
    result=result,
    α=0.2,
    θ_start=theta_start,
    θ_rtol=0,
    z_tol=1e-2,
    progress=True,
    maxsteps=100,
    nsims=100,
    rng=jax.random.PRNGKey(1)
)

simulator configuration:

nb_latent_variables = 100

model = partial(
    lensingLogNormal,
    N=nb_latent_variables,
    map_size=5,
    gal_per_arcmin2=30,
    sigma_e=0.2,
    model_type='lognormal',
    non_gaussianity=0.03
)

[image]

Convergence: [image]

Loss of the last 90 iterations: [image]


simulator configuration:

nb_latent_variables = 200

model = partial(
    lensingLogNormal,
    N=nb_latent_variables,
    map_size=5,
    gal_per_arcmin2=30,
    sigma_e=0.2,
    model_type='lognormal',
    non_gaussianity=0.03
)

[image]

Convergence: [image]

Loss of the last 90 iterations: [image]


simulator configuration:

nb_latent_variables = 250

model = partial(
    lensingLogNormal,
    N=nb_latent_variables,
    map_size=5,
    gal_per_arcmin2=30,
    sigma_e=0.2,
    model_type='lognormal',
    non_gaussianity=0.03
)

[image]

Convergence: [image]

Loss of the last 90 iterations: [image]


simulator configuration:

nb_latent_variables = 500

model = partial(
    lensingLogNormal,
    N=nb_latent_variables,
    map_size=5,
    gal_per_arcmin2=30,
    sigma_e=0.2,
    model_type='lognormal',
    non_gaussianity=0.03
)

(I don't have the full-field contour for this one because 500 × 500 was too much for my GPU's memory.)

[image]

Convergence: [image]

Loss: [image]
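(Rough back-of-envelope for why the N=500 full-field run exceeds GPU memory, assuming float32 maps and a hypothetical sampler that keeps every sampled latent map in memory; the sample count is made up.)

N = 500
bytes_per_map = N * N * 4         # one float32 latent map ≈ 1 MB
n_samples = 10_000                # hypothetical number of full-field posterior samples
print(n_samples * bytes_per_map / 1e9, "GB")  # ≈ 10 GB just for the stored maps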
