vsavram / AM205-Project


Updates and Questions on Scalar_Delta / Cosine_Similarity_Random_Index #4

Closed jscuds closed 3 years ago

jscuds commented 3 years ago

@msbutler , I thought I'd be able to finish both of these...but I ran into some issues with the latter case.

UPDATES:

*Scalar_Delta Demo:*

*Cosine_Similarity_Random_Index Demo:*

If you have 20 minutes to go over cos_sim_sq, that would be great; I don't think I understand the implementation as well as I thought I did.
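For reference while we discuss it, here is a minimal sketch of what a squared cosine similarity between two vectors usually computes; this is an assumption about what cos_sim_sq does, not its actual implementation:

```python
import numpy as np

def cos_sim_sq(a, b):
    # squared cosine similarity: (a . b)^2 / (||a||^2 * ||b||^2)
    # squaring makes the score invariant to the sign of either vector
    num = np.dot(a, b) ** 2
    denom = np.dot(a, a) * np.dot(b, b)
    return num / denom
```

Note that because of the squaring, parallel and anti-parallel vectors both score 1, while orthogonal vectors score 0.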

General Question Related to Performance Measures:

@vsavram @mecunha for your awareness.

msbutler commented 3 years ago

@jscuds

  1. What do you mean by "how many indices are enough"? Like how many observations should we sample? I would make that a parameter passed into self.grad_func_specs.

We don't need to iterate for each aux_function because ff.forward evaluates every aux_function all at once. The only loop in default_finite_differences iterates through the input dimensionality, but that functionality doesn't currently work.
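To make the per-dimension loop concrete, here is a standalone sketch of central finite differences that iterates through the input dimensionality; this is an illustration of the general technique, not the actual default_finite_differences code:

```python
import numpy as np

def finite_diff_grad(f, x, eps=1e-6):
    # central finite differences: loop over each input dimension,
    # perturb it by +/- eps, and approximate the partial derivative
    grad = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        step = np.zeros_like(x, dtype=float)
        step[i] = eps
        grad[i] = (f(x + step) - f(x - step)) / (2 * eps)
    return grad
```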

I think all you need to add to default_finite_diff is something along the lines of:

if 'random' in self.grad_func_specs:
     # self.grad_func_specs['random'] is the number of indices we want to sample
     random_indices = np.random.randint(0, x.shape[1], self.grad_func_specs['random'])
     x = x[:, random_indices]
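Just to illustrate what that slicing does, a standalone version on a toy array (n_sample is a hypothetical stand-in for grad_func_specs['random']):

```python
import numpy as np

# toy data: 100 observations, 10 input dimensions
rng = np.random.default_rng(0)
x = rng.normal(size=(100, 10))

n_sample = 3  # hypothetical value for grad_func_specs['random']
random_indices = np.random.randint(0, x.shape[1], n_sample)
x_sub = x[:, random_indices]  # keep all rows, sample 3 columns
```

Note `np.random.randint` samples with replacement, so the same index can appear twice; if that matters, `rng.choice(x.shape[1], n_sample, replace=False)` avoids duplicates.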
  2. All the aux_functions should be evaluated, regardless of which observations we sample. You shouldn't need to change similarity_score.

  3. We currently don't have that functionality. @mcembalest @mecunha we should add functionality to spit out the MSE for ff, NLM, and LUNA, ideally in the next 24 hours.
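For the MSE reporting, the core helper could be as simple as the sketch below; how we hook it up to the ff, NLM, and LUNA prediction interfaces is still to be decided:

```python
import numpy as np

def mse(y_true, y_pred):
    # mean squared error between targets and model predictions
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return np.mean((y_true - y_pred) ** 2)
```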

msbutler commented 3 years ago

@jscuds awesome work with the finite diff demos. Some feedback:

Scalar Delta Demo

Proposed next steps:

Gonna review the sampling thing now.

msbutler commented 3 years ago

@jscuds

On the sampling analysis: again, this looks great. A few things to note: