patrick-kidger / equinox

Elegant easy-to-use neural networks + scientific computing in JAX. https://docs.kidger.site/equinox/
Apache License 2.0

Optimally training ensembles. #693

Open BabaYara opened 3 months ago

BabaYara commented 3 months ago

What is the recommended way to train an ensemble model like the one in the documentation? I have reproduced the model here and would appreciate some help with this.


```python
import equinox as eqx
import jax

key = jax.random.PRNGKey(0)
keys = jax.random.split(key, 8)

# Create an ensemble of models
@eqx.filter_vmap
def make_ensemble(key):
    return eqx.nn.MLP(2, 2, 2, 2, key=key)

mlp_ensemble = make_ensemble(keys)

# Evaluate each member of the ensemble on the same data
@eqx.filter_vmap(in_axes=(eqx.if_array(0), None))
def evaluate_ensemble(model, x):
    return model(x)

evaluate_ensemble(mlp_ensemble, jax.random.normal(key, (2,)))

# Evaluate each member of the ensemble on different data
@eqx.filter_vmap
def evaluate_per_ensemble(model, x):
    return model(x)

evaluate_per_ensemble(mlp_ensemble, jax.random.normal(key, (8, 2)))
```
BabaYara commented 3 months ago

I am currently doing something along these lines:

```python
import jax.numpy as jnp
import optax

def v_pred(model, X):
    return jax.vmap(evaluate_ensemble, (None, 0))(model, X)

def loss_fn30(model, X, Y):
    pred = jnp.squeeze(v_pred(model, X))
    loss = pred - jnp.expand_dims(Y, 1)
    l6 = jnp.square(loss)
    wt0 = jnp.where(l6 < 2e-16, 0.0, 1.0)  # ignore elements whose squared error is already ~zero
    return jnp.mean(wt0 * l6)

LR_rate = 1e-7
init_learning_rate = jnp.array(LR_rate)
opt1 = optax.inject_hyperparams(optax.adamw)(learning_rate=LR_rate, weight_decay=5.0)
clipping = optax.inject_hyperparams(optax.clip_by_global_norm)(max_norm=1e-4)
optimizer1 = optax.chain(clipping, opt1)  # chain the optimizer with the clipping transform
opt_state1 = optimizer1.init(eqx.filter(mlp_ensemble, eqx.is_array))

@eqx.filter_jit
def train_per_ensemble(model, x, y, state):
    vals, grads = eqx.filter_value_and_grad(loss_fn30)(model, x, y)
    updates, state = optimizer1.update(grads, state, eqx.filter(model, eqx.is_array))
    model = eqx.apply_updates(model, updates)
    return model, state, vals
```
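
For context, a step like this would be driven by an ordinary Python loop along these lines (illustrative only; `X_train`, `Y_train`, and the step count are placeholders):

```python
model, state = mlp_ensemble, opt_state1
for i in range(1_000):
    model, state, loss = train_per_ensemble(model, X_train, Y_train, state)
    if i % 100 == 0:
        print(i, loss)
```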
lockwo commented 3 months ago

1) If you add `python` after the opening "```" you get nice syntax highlighting on the code, like this:

```python
print("equinox is great")
```

2) At a cursory glance your code seems OK, but is there a more specific question here? Is something going wrong with the code? In general I treat ensembles of models the same way: it's just a pytree, and I usually deal with the differences between members in the loss function.
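
For concreteness, a rough sketch of what I mean, reusing `mlp_ensemble` from above (the names `loss_per_member`, `loss_fn`, `optim`, and `step`, and the MSE loss itself, are just illustrative choices):

```python
import equinox as eqx
import jax
import jax.numpy as jnp
import optax

# One loss per ensemble member: vmap over the stacked parameters while
# broadcasting the same (x, y) batch to every member.
@eqx.filter_vmap(in_axes=(eqx.if_array(0), None, None))
def loss_per_member(model, x, y):
    pred = jax.vmap(model)(x)  # vmap over the batch dimension
    return jnp.mean((pred - y) ** 2)

def loss_fn(model, x, y):
    # Each member's gradient only depends on its own loss term.
    return jnp.sum(loss_per_member(model, x, y))

optim = optax.adam(1e-3)
opt_state = optim.init(eqx.filter(mlp_ensemble, eqx.is_array))

@eqx.filter_jit
def step(model, opt_state, x, y):
    loss, grads = eqx.filter_value_and_grad(loss_fn)(model, x, y)
    updates, opt_state = optim.update(grads, opt_state, eqx.filter(model, eqx.is_array))
    model = eqx.apply_updates(model, updates)
    return model, opt_state, loss
```

The whole stacked ensemble is updated as a single pytree; the only ensemble-specific part is how the per-member losses are reduced.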

BabaYara commented 3 months ago

The first question is about the loss. I felt like the way my loss is structured forces the models to converge together, so I would appreciate any tips on structuring the loss so that it treats the models differently.

The second has to do with some optimizations that sped up single-model training but that I cannot reproduce here. I was using the scan function to optimize the training loop, as below, but I can't seem to use the same idea with ensembles.


```python
from jax import flatten_util

ars, sts = eqx.partition(model, eqx.is_array)
arrs, uf = flatten_util.ravel_pytree(ars)  # flatten all parameters into a single vector
opt_state1 = optimizer1.init(arrs)

@eqx.filter_jit
def loss_fn30(arrs, X, Y, lvl):
    model = eqx.combine(uf(arrs), sts)  # unflatten and recombine with the static parts
    pred = v_pred(model, X)
    loss = pred - Y
    bce = jnp.square(loss)
    return jnp.mean(bce)

@eqx.filter_jit
def make_fn(X, Y, arrs, lvl):
    grads = jax.grad(loss_fn30)(arrs, X, Y, lvl)
    return grads

@eqx.filter_jit
def make_step02(X, Y, LR, arrs, lvl, state):
    gradss = make_fn(X, Y, arrs, lvl)
    updates, state = optimizer1.update(gradss, state, arrs)
    arrs = optax.apply_updates(arrs, updates)
    return LR, arrs, lvl, state

@eqx.filter_jit
def step_function(carry, x_y):
    LR, arrs, wght, state = carry
    XX_slice, YY_slice = x_y
    LR, arrs, wght, state = make_step02(XX_slice, YY_slice, LR, arrs, wght, state)
    return (LR, arrs, wght, state), None

@eqx.filter_jit
def multiple_steps(XX, YY, LR, arrs, wght, state):
    inputs = jnp.expand_dims(XX, axis=1), jnp.expand_dims(YY, axis=1)
    (LR, arrs, wght, state), _ = jax.lax.scan(step_function, (LR, arrs, wght, state), inputs)
    return LR, arrs, wght, state

LR, arrs, _, opt_state1 = multiple_steps(XX, yy, LR, arrs, wght, opt_state1)
```
lockwo commented 3 months ago

The loss function is highly dependent on the problem. For some ensembles you can probably just vmap a loss over the members; for others you need to manually inspect each one.
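
(Illustrative only, continuing the sketch above: once you have one loss per member you can also weight or mask members individually, e.g. with a hypothetical `member_weights` array.)

```python
def weighted_loss(model, x, y, member_weights):
    # losses has shape (n_members,), so members can be treated differently,
    # e.g. down-weighting ones that have already converged.
    losses = loss_per_member(model, x, y)
    return jnp.sum(member_weights * losses)
```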

Re scan: as long as the model isn't in the inputs you loop over, there shouldn't be any issues with it in principle.
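
For instance (sketch only, reusing `loss_fn`/`optim` from the earlier sketch): keep the filtered parameters in the scan carry, close over the static parts, and only put the data in `xs`.

```python
@eqx.filter_jit
def multiple_steps(model, opt_state, XX, YY):
    params, static = eqx.partition(model, eqx.is_array)

    def scan_step(carry, batch):
        params, opt_state = carry
        x, y = batch
        model = eqx.combine(params, static)  # static parts are closed over, not scanned
        loss, grads = eqx.filter_value_and_grad(loss_fn)(model, x, y)
        updates, opt_state = optim.update(grads, opt_state, params)
        params = eqx.apply_updates(params, updates)
        return (params, opt_state), loss

    # XX, YY have a leading "number of steps" axis; one batch per scan iteration.
    (params, opt_state), losses = jax.lax.scan(scan_step, (params, opt_state), (XX, YY))
    return eqx.combine(params, static), opt_state, losses
```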