amparore / leaf

A Python framework for the quantitative evaluation of eXplainable AI methods

No hinge_loss in leaf_plot() #3

Closed singsinghai closed 1 year ago

singsinghai commented 1 year ago

Hi Amparore,

I can see that you use hinge_loss in the get_lime_local_concordance and get_lime_prescriptivity functions. However, in leaf_plot():

def leaf_plot(stability, method):
    fig, ax1 = plt.subplots(figsize=(6, 2.2))
    data = [ stability.flatten(),
             1 - rows[method + '_local_discr'],
             rows[method + '_fidelity_f1'],
             # rows[method + '_prescriptivity_f1'],
             # rows[method + '_bal_prescriptivity' ],
             1 - 2 * np.abs(rows[method + '_boundary_discr']) ]

the hinge_loss is not applied. In addition, you also defined another wb_prescriptivity that checks the accuracy/F1 score of the whitebox model on x1 (the point on the boundary) and its neighborhood. Why did you change your mind and not continue using it?

Thanks!

amparore commented 1 year ago

Here hinge_loss is not used with the usual meaning it has as a loss function. For local_concordance/prescriptivity, the formula does not guarantee that the final value lies in the range [0..1]. By clipping it at 0, we remove some small noise below 0, which we thought would improve the compact visualization. Since the clipping function that works best in this case, max(0, 1 - x), happens to be widely known as the hinge loss, we kept the same name. Perhaps we should have called it something else to avoid confusion.
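As a minimal sketch of the clipping described above (the helper name `hinge_clip` is hypothetical, not the library's actual function), assuming `x` is a raw discrepancy term that may drift slightly above 1 due to noise:

```python
import numpy as np

def hinge_clip(x):
    # max(0, 1 - x): a discrepancy x slightly above 1 would give a raw
    # score 1 - x just below 0; clipping maps it back to 0, so the
    # final value stays in [0, 1].
    return np.maximum(0.0, 1.0 - np.asarray(x, dtype=float))

print(hinge_clip([0.0, 0.3, 1.05]))  # scores: 1.0, 0.7, 0.0
```

Without the clip, the 1.05 case would yield -0.05, i.e. the "small noise in the negative plane" mentioned below.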

In principle, you could remove this clipping and the results would remain (almost) the same, with some small noise in the negative plane.