psiz-org / psiz

A Python package for inferring psychological embeddings.
https://psiz.org
Apache License 2.0

Explorations of parameter sensitivity #21

Open rgerkin opened 3 years ago

rgerkin commented 3 years ago

@roads We've been trying to explore the sensitivity of our fitted parameters, using e.g. the Fisher information as a measure of uncertainty (computed from the second partial derivatives of the log-likelihood with respect to the parameters at the MLE). It's not totally clear to me what the most efficient way to do this is, but our approach has been something like:

  1. Extract all the fitted parameters from the model and call these the reference values for the next step, e.g. rho_0, beta_0, etc.
  2. Run a function like this:
import numpy as np
import matplotlib.pyplot as plt
from tqdm import tqdm


def plot_mse(model, var_scope, var_name, var_0, multiples):
    """Plot MSE as one fitted parameter is scaled away from its fitted value."""
    mses = []
    for multiple in tqdm(multiples):
        # Overwrite the fitted parameter with a scaled copy of its fitted value.
        if var_scope == 'similarity':
            setattr(model.kernel.similarity, var_name, var_0 * multiple)
        elif var_scope == 'distance':
            setattr(model.kernel.distance, var_name, var_0 * multiple)
        # Re-compile and re-evaluate on the training observations.
        model.compile(**compile_kwargs)
        result = model.evaluate(ds_obs_train, verbose=0, return_dict=True)
        print(var_0 * multiple, result['mse'])
        mses.append(result['mse'])
    plt.plot(var_0 * multiples, mses)

multiples = np.logspace(-1, 1, 100)  # scan from 1/10th to 10x the fitted value of the parameter
plot_mse(model, "distance", "rho", rho_0, multiples)

This plots the MSE vs. the value of one parameter, e.g. rho. One issue is that sometimes the MSE is almost totally insensitive to that parameter, so I wonder if we are doing something wrong, and in particular whether ds_obs_train is the right chunk of data (we generate it the same way you do in your examples).
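One rough way to quantify how flat the curve is near the fitted value (in the spirit of the Fisher-information idea above) is to fit a quadratic to the scan points around multiple = 1 and read off the second derivative. A minimal sketch with a hypothetical helper, curvature_at_mle, applied to the multiples and the mses collected in the loop above; it only stands in for the observed Fisher information if the evaluated metric plays the role of the negative log-likelihood:

import numpy as np

def curvature_at_mle(var_0, multiples, losses):
    # Estimate the second derivative of the loss at the fitted value by
    # fitting a quadratic to the scan points closest to multiple = 1.
    multiples = np.asarray(multiples)
    losses = np.asarray(losses)
    values = var_0 * multiples
    window = np.abs(np.log10(multiples)) < 0.1  # roughly 0.8x to 1.25x the fitted value
    a, b, c = np.polyfit(values[window], losses[window], deg=2)
    return 2.0 * a  # second derivative of a*x**2 + b*x + c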

As a sort of positive control, I even tried something like the above but scaling the embedded coordinates themselves. This should completely wreck the MSE, but even that doesn't really work. We are just assigning directly to the embedding, e.g. model.stimuli.embedding.weights[0] *= multiple, though maybe this isn't actually changing the coordinates and weights[0] is more of a window onto the value than the value itself.

If you have time to meet with us about any of this next week or beyond let me know.

rgerkin commented 3 years ago

@colemanliyah @slamm1

roads commented 3 years ago

If I understand the setup, I think ds_obs_train is the correct chunk of data.

For updating the model weights, I would try using TF assign. Python's built-in setattr may be a workable solution, but I don't know how it interacts with the internals of TF's graph. For the embedding weights, this would look something like model.stimuli.embedding.embeddings.assign(new_weights).
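A minimal sketch of what the positive control above might look like with assign, assuming embeddings is the underlying tf.Variable and reusing compile_kwargs and ds_obs_train from the earlier snippet:

# Snapshot the fitted coordinates so they can be restored afterwards.
fitted_z = model.stimuli.embedding.embeddings.numpy()
for multiple in [0.5, 1.0, 2.0]:
    # assign() writes the scaled coordinates into the variable the model actually uses.
    model.stimuli.embedding.embeddings.assign(fitted_z * multiple)
    model.compile(**compile_kwargs)
    result = model.evaluate(ds_obs_train, verbose=0, return_dict=True)
    print(multiple, result['mse'])
# Restore the fitted coordinates.
model.stimuli.embedding.embeddings.assign(fitted_z)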

Yes, happy to chat. I'll send you a DM.