mikeizbicki / deep-tda


For next week #9

Open mikeizbicki opened 5 years ago

mikeizbicki commented 5 years ago

Goals for next week are:

  1. Type out the results for the covering number

  2. Create examples for the doubling number/dimension, try to prove results

  3. Fix the computations to run multiple batches on the GPU per round of the TDA program

  4. Combine Theorem 6.8 (from *Understanding Machine Learning: From Theory to Algorithms*) and the results from this paper to explain why VC dimension doesn't explain the generalization error of deep neural networks. Lemma 4.2 and Definition 4.3 (from the same book) may help you.
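For item 4, a sketch of the standard argument (assuming Theorem 6.8 here refers to the quantitative version of the fundamental theorem of agnostic PAC learning; the constants $C_1, C_2$ are as in that statement):

$$C_1 \, \frac{d + \log(1/\delta)}{\epsilon^2} \;\le\; m_{\mathcal{H}}(\epsilon, \delta) \;\le\; C_2 \, \frac{d + \log(1/\delta)}{\epsilon^2},$$

where $d = \mathrm{VCdim}(\mathcal{H})$ and $m_{\mathcal{H}}(\epsilon, \delta)$ is the sample complexity. For modern deep networks $d$ typically exceeds the number of training examples, so the resulting generalization bound is vacuous; that gap is what the covering-number/doubling-dimension results would need to close.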

ranyishere commented 5 years ago

For #3, we want to get a single layer for all the datapoints and then generate the diagram for that layer. We can delete the data after the diagram is generated.
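A minimal sketch of that pipeline (not the repo's actual code): treat one layer's activations as a point cloud, build its H0 persistence diagram, then free the activations. For H0 of a Vietoris-Rips filtration, every point is born at 0 and the finite death times are exactly the edge lengths of the minimum spanning tree, so a small Kruskal implementation suffices to illustrate the idea without Ripser:

```python
import numpy as np

def h0_diagram(activations):
    """H0 persistence diagram of a point cloud as (birth, death) pairs.

    Finite deaths are the MST edge lengths of the pairwise-distance
    graph (Kruskal with union-find); one essential bar never dies.
    """
    n = len(activations)
    dists = np.linalg.norm(
        activations[:, None, :] - activations[None, :, :], axis=-1)

    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    deaths = []
    edges = sorted((dists[i, j], i, j)
                   for i in range(n) for j in range(i + 1, n))
    for d, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            deaths.append(d)        # a component dies when two merge
    # n-1 finite bars plus the one component that survives forever.
    return [(0.0, d) for d in deaths] + [(0.0, np.inf)]

layer = np.random.randn(100, 32)    # stand-in for one layer's activations
dgm = h0_diagram(layer)
del layer                           # data can be freed once the diagram exists
```

In the real pipeline Ripser would also produce the higher-dimensional diagrams; the point here is only that the raw activations are no longer needed after the diagram is computed.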

ranyishere commented 5 years ago

Note: we might not be able to fit all the datapoints into the Ripser diagram. If that's the case, we will have to come back and think about how to merge the data together.
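One common workaround when a point cloud is too large for Ripser (a hypothetical sketch, not something the repo does yet) is greedy farthest-point subsampling: repeatedly pick the point farthest from those already chosen. This keeps the Hausdorff distance between the subsample and the full cloud small, and persistence diagrams are stable with respect to that distance, so the subsampled diagram approximates the full one:

```python
import numpy as np

def farthest_point_sample(X, k, seed=0):
    """Greedy maxmin subsampling: return indices of k points of X,
    each chosen as the point farthest from the points picked so far."""
    rng = np.random.default_rng(seed)
    chosen = [int(rng.integers(len(X)))]
    # min distance from every point to the chosen set so far
    mind = np.linalg.norm(X - X[chosen[0]], axis=1)
    for _ in range(k - 1):
        nxt = int(np.argmax(mind))          # farthest remaining point
        chosen.append(nxt)
        mind = np.minimum(mind, np.linalg.norm(X - X[nxt], axis=1))
    return np.array(chosen)

cloud = np.random.randn(5000, 16)           # stand-in for one layer's activations
idx = farthest_point_sample(cloud, 500)
subsample = cloud[idx]                      # feed this to Ripser instead
```

Merging diagrams from disjoint batches is harder, since persistence is not additive across batches; subsampling sidesteps that problem entirely.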

mathemonads commented 5 years ago
  1. Done
  2. Proved one result; constructed a solution for the other, which still needs to be written up.
  3. We are in fact able to fit a single layer of a 5,000-point dataset into Ripser. https://github.com/mikeizbicki/deep-tda/issues/10 documents progress on expanding to the whole dataset.
  4. Done