heidmic / suprb

GNU General Public License v3.0

Introduce Hyperellipsoidal conditions #109

Open heidmic opened 2 years ago

heidmic commented 2 years ago

with and without rotations

tniemeier commented 2 years ago

@RomanSraj could you create a new branch named "issue_109_introduce_hyperellipsoidal_conditions" for me? It seems that I can't create a branch on my own.

--- Edit: fixed, I got permission to do it myself

heidmic commented 2 years ago

I gave you permission to do it directly. Otherwise, you could have forked the repository and pushed from there.

Please follow the contribution guidelines and shorten the name to 109_introduce_hyperellipsoidal_conditions

tniemeier commented 2 years ago

> I gave you the permission to do it directly. Otherwise, you could have forked and pushed from there.
>
> Please follow the contribution guidelines and shorten the name to 109_introduce_hyperellipsoidal_conditions

done

tniemeier commented 2 years ago

Can someone help me understand how rules calculate their experience? @RomanSraj @heidmic

I am trying to implement the gaussian_kernel_function, but I am getting this error: [image]

It pretty much means the rule was never fitted or has an experience of 0, coming from this line of code (suprb/optimizer/rule/es/es.py): [image]

But I don't really understand the code here in the first place. Why is it possible to reference .is_fitted and .experience on the lambda variable rule? Doesn't it need to be declared as a Rule first? Where does it get the information that the lambda variable rule is an instance of the class Rule?

My other question is how rule.experience is calculated. The only place I found where it is set is (suprb/rule/base.py): [image], and I don't really understand how X.shape[0] returns an experience value.

Can someone help me understand the code base better? As long as I don't understand how experience is calculated, I can't really debug why my error is being thrown.

Thank you in advance.

heidmic commented 2 years ago
  1. We can do this because Python isn't statically typed (this is duck typing). We know that each element of children will be of class Rule and will therefore have both attributes, but Python doesn't check this. If we were to put an object of class Solution into children, it would (most likely) throw an error at runtime, but there is no upfront check that what we are doing here is actually legal (although we know it is, because there is no way an element of children is not a Rule).
  2. X is a matrix containing data points (examples) as rows and features as columns. At this point in the code (in rule.fit) we have already removed (https://github.com/heidmic/suprb/blob/42a6599e980c4fc2e1d8ca3fa85cb592d5a00aff/suprb/rule/base.py#L71) all elements of X that are not matched by this rule; all remaining rows are used for training. experience signifies how many data points this rule's model was fitted on, i.e. how many examples were actually used to determine the weights of this rule's RidgeRegression. If a rule's experience is 0, no examples were matched and therefore none were available for training at this point. X.shape[0] gives us the number of rows of the matrix X. The float cast is just there to make it the same type as error, which makes some down-the-line computations more straightforward.
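Both points can be illustrated with a minimal sketch. Note this is a hypothetical, simplified Rule (interval matching, no actual model fitting), not suprb's real class; only the pattern is the same: attributes are accessed via duck typing, and experience is the count of matched rows.

```python
import numpy as np

class Rule:
    """Simplified stand-in for suprb's Rule (hypothetical sketch)."""
    def __init__(self, lower, upper):
        self.lower, self.upper = lower, upper
        self.is_fitted = False
        self.experience = 0.0

    def match(self, X):
        # Boolean mask: which rows of X fall inside this rule's bounds
        return np.all((X >= self.lower) & (X <= self.upper), axis=1)

    def fit(self, X, y):
        # Keep only the examples this rule matches
        X_matched = X[self.match(X)]
        # experience = number of matched training examples
        self.experience = float(X_matched.shape[0])
        self.is_fitted = self.experience > 0
        return self

rules = [Rule(np.array([0.0]), np.array([0.5])),   # matches all rows below
         Rule(np.array([0.9]), np.array([1.0]))]   # matches nothing
X = np.array([[0.1], [0.2], [0.3]])
y = np.zeros(3)

for rule in rules:
    rule.fit(X, y)

# Duck typing: we can access .is_fitted / .experience on any element
# because we *know* they are Rule instances; Python never checks.
fitted = [r for r in rules if r.is_fitted and r.experience > 0]
print([r.experience for r in rules])  # [3.0, 0.0]
print(len(fitted))                    # 1
```

The second rule ends up exactly in the state from the error above: never fitted, experience 0, because no training example fell inside its bounds.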

The most likely issue you are encountering is that no rules matched anything, or at least did not fulfil the criteria defined by this optimizer's acceptance function, e.g. https://github.com/heidmic/suprb/blob/42a6599e980c4fc2e1d8ca3fa85cb592d5a00aff/suprb/optimizer/rule/es/es.py#L59
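The shape of such a filter can be sketched like this (a hypothetical stand-in, not the actual es.py code): the lambda's rule parameter is just whatever element of the list is passed in, and the filter only works because every element happens to be rule-like.

```python
# Hypothetical stand-in for a rule candidate; only the two attributes
# the filter cares about are modeled here.
class Candidate:
    def __init__(self, is_fitted, experience):
        self.is_fitted = is_fitted
        self.experience = experience

children = [
    Candidate(is_fitted=True,  experience=5.0),  # fitted on 5 examples
    Candidate(is_fitted=False, experience=0.0),  # matched nothing
    Candidate(is_fitted=True,  experience=0.0),  # edge case: fitted flag but no data
]

# The lambda parameter `rule` is untyped; attribute access succeeds
# purely because every element of `children` has these attributes.
valid = list(filter(lambda rule: rule.is_fitted and rule.experience > 0,
                    children))
print(len(valid))  # 1
```

If valid comes back empty here, every candidate matched nothing, which is exactly the "never fitted / experience of 0" symptom described above.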