Thanks, but before you go any further, can we please disable code linting? I can't review anything if you continue to change code formatting.
Also stop cleaning up stuff at random. If you want to clean up the code, do so as part of a separate PR.
The CMake file had indentation inconsistencies: half of the file used 2 spaces, the other half used 4, and some parts had none at all. I will deactivate the linter for the most superfluous things 👍
And revert the formatting/refactoring changes.
Stats explanation:
Accuracy
Accuracy is the proportion of correctly classified points (both true positives and true negatives) to the total number of points in the dataset. It is a common metric to evaluate the overall performance of a classification model. However, accuracy may not always be the best metric, especially when there's a class imbalance (one class has significantly more points than the other).
Accuracy = (True Positives + True Negatives) / (True Positives + True Negatives + False Positives + False Negatives)
Precision
Precision, also known as positive predictive value, measures the proportion of true positives (correctly classified points of a specific class) to the total number of points predicted as that class (both true positives and false positives). In other words, it quantifies how well the classifier correctly identifies a specific class without including false positives.
Precision = True Positives / (True Positives + False Positives)
Sensitivity
Sensitivity, also known as recall or true positive rate, measures the proportion of true positives to the total number of actual positives (both true positives and false negatives). It quantifies the ability of the classifier to find all the relevant points of a specific class.
Sensitivity = True Positives / (True Positives + False Negatives)
F1 Score
In point cloud classification, precision and sensitivity are often used together to provide a more comprehensive understanding of the classifier's performance. This is because they provide complementary information: precision is focused on reducing false positives, while sensitivity is focused on capturing all true positives. To balance these two metrics, you can use the F1 score, which is the harmonic mean of precision and sensitivity:
F1 Score = 2 * (Precision * Sensitivity) / (Precision + Sensitivity)
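For reference, here is a minimal sketch of how these four metrics could be computed from raw confusion counts for a single class. The names (Metrics, computeMetrics) and the example counts are illustrative only, not taken from the OpenPointClass code:

```cpp
#include <iostream>

// Illustrative only: metrics for one class, computed from raw
// confusion-matrix counts (true/false positives and negatives).
struct Metrics {
    double accuracy;
    double precision;
    double sensitivity;
    double f1;
};

Metrics computeMetrics(long tp, long tn, long fp, long fn) {
    Metrics m{};
    double total = static_cast<double>(tp + tn + fp + fn);
    // Guard against empty denominators so a class with no points yields 0.
    m.accuracy = total > 0 ? (tp + tn) / total : 0.0;
    m.precision = (tp + fp) > 0 ? static_cast<double>(tp) / (tp + fp) : 0.0;
    m.sensitivity = (tp + fn) > 0 ? static_cast<double>(tp) / (tp + fn) : 0.0;
    m.f1 = (m.precision + m.sensitivity) > 0
               ? 2.0 * m.precision * m.sensitivity / (m.precision + m.sensitivity)
               : 0.0;
    return m;
}

int main() {
    // Example counts; in practice these come from comparing predicted
    // vs. ground-truth labels per class.
    Metrics m = computeMetrics(/*tp=*/90, /*tn=*/800, /*fp=*/20, /*fn=*/10);
    std::cout << "Accuracy: " << m.accuracy << "\n"
              << "Precision: " << m.precision << "\n"
              << "Sensitivity: " << m.sensitivity << "\n"
              << "F1: " << m.f1 << "\n";
}
```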
Not quite; just look at this diff https://github.com/uav4geo/OpenPointClass/pull/16/files
I can't make changes:
! [remote rejected] HeDo88TH/main -> HeDo88TH/main (permission denied)
error: failed to push some refs to 'https://github.com/HeDo88TH/OpenPointClass'
Modifications: https://github.com/pierotofy/OpenPointClass/tree/stats
Closing via #18
Added extended training statistics:
The --stats-file switch allows the user to specify where to write the statistics.

I am still working on this 👍
Plus, I added another switch, --eval-result, which points to a PLY file where the validation result point cloud is written.