tomnisim / ScooterGotcha

Hazard detection system for scooter riders

Test - test plan #110

Open amitmosk opened 1 year ago

amitmosk commented 1 year ago

RP - Acceptance Test

amitmosk commented 1 year ago

Some ideas:

Integration:

  1. without camera & gps
  2. with camera
  3. with gps
  4. client & server
  5. RP & server
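A minimal sketch of the "without camera & gps" / "with camera" / "with gps" integration cases, using mocks in place of the real sensors. The names `HazardPipeline`, `CameraSensor.read`, and `GpsSensor.position` are illustrative assumptions, not the project's actual classes:

```python
from unittest.mock import MagicMock

# Hypothetical pipeline: HazardPipeline and the sensor method names are
# assumptions for illustration, not the real ScooterGotcha API.
class HazardPipeline:
    def __init__(self, camera, gps):
        self.camera = camera
        self.gps = gps

    def snapshot(self):
        # Degrade gracefully when a sensor is missing (case 1)
        frame = self.camera.read() if self.camera else None
        location = self.gps.position() if self.gps else None
        return {"frame": frame, "location": location}

# 1. without camera & gps
offline = HazardPipeline(camera=None, gps=None)
assert offline.snapshot() == {"frame": None, "location": None}

# 2./3. with (mocked) camera and gps
camera = MagicMock()
camera.read.return_value = b"jpeg-bytes"
gps = MagicMock()
gps.position.return_value = (32.08, 34.78)
full = HazardPipeline(camera, gps)
assert full.snapshot() == {"frame": b"jpeg-bytes", "location": (32.08, 34.78)}
```

The same mock pattern extends to the client & server and RP & server cases by mocking the network layer instead of the sensors.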

Acceptance:

  1. day, night, rain.
  2. each city
  3. different attitudes (angles) on the road
  4. offline, online
  5. broken camera, broken gps
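The acceptance conditions above multiply into a scenario matrix. A sketch of generating it with `itertools.product`; the specific city names and the `run_ride_scenario` driver are placeholders, not part of the project:

```python
import itertools

# Condition axes from the plan; city list is a placeholder assumption.
LIGHTING = ["day", "night", "rain"]
CITIES = ["city-A", "city-B", "city-C"]
CONNECTIVITY = ["online", "offline"]

matrix = list(itertools.product(LIGHTING, CITIES, CONNECTIVITY))
assert len(matrix) == 18  # 3 * 3 * 2 scenarios

for lighting, city, connectivity in matrix:
    scenario = f"{lighting}/{city}/{connectivity}"
    # run_ride_scenario(scenario)  # hypothetical driver for the real system
```

Broken-camera and broken-GPS cases can be added as a fourth axis rather than separate ad-hoc tests.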

Unit:

  1. RP - override files, detect hazards, running time??
  2. server
  3. admin
  4. rider
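For the RP unit items (detect hazards, running time), a sketch of asserting both correctness and a latency budget in one test. `detect_hazards` here is a stand-in stub, and the 1-second budget is a guess, not a project requirement:

```python
import time

# Stub detector: the real RP code would run the trained model here.
def detect_hazards(frame):
    # Illustrative logic only: flag frames containing the word "pothole"
    return ["pothole"] if b"pothole" in frame else []

start = time.perf_counter()
assert detect_hazards(b"...pothole ahead...") == ["pothole"]
assert detect_hazards(b"clear road") == []
elapsed = time.perf_counter() - start
# "running time??" from the plan: bound per-frame latency (budget is assumed)
assert elapsed < 1.0
```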

Load:

  1. register
  2. login
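A sketch of driving register/login under concurrent load with a thread pool. The `register`/`login` functions below are local stubs with simulated latency; a real load test would replace them with HTTP calls to the server's actual routes:

```python
from concurrent.futures import ThreadPoolExecutor
import time

# Stub endpoints: stand-ins for the real register/login server routes.
def register(user_id):
    time.sleep(0.001)  # simulated server latency
    return 200

def login(user_id):
    time.sleep(0.001)
    return 200

def hammer(endpoint, n_users=100, workers=20):
    """Fire n_users concurrent requests and measure wall-clock time."""
    t0 = time.perf_counter()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        codes = list(pool.map(endpoint, range(n_users)))
    return codes, time.perf_counter() - t0

codes, elapsed = hammer(register)
assert all(c == 200 for c in codes)
codes, elapsed = hammer(login)
assert all(c == 200 for c in codes)
```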
amitmosk commented 1 year ago

Accuracy: The percentage of correctly classified instances out of the total number of instances. This is a simple and straightforward measure, but it can be misleading when classes are imbalanced (hazard frames are likely much rarer than clear ones).

Precision: The proportion of true positives (correctly identified instances) out of the total number of instances predicted as positive. Precision is useful when you want to avoid false positives (instances that are incorrectly classified as positive).

Recall: The proportion of true positives out of the total number of positive instances. Recall is useful when you want to avoid false negatives (instances that are incorrectly classified as negative).

F1-score: The harmonic mean of precision and recall, taking both measures into account. F1-score is a good overall measure of a model's performance when you need a single number balancing the two.
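The four measures above follow directly from the confusion-matrix counts. A worked example with made-up counts (tp/fp/fn/tn are illustrative only):

```python
# Illustrative counts, not real evaluation results.
tp, fp, fn, tn = 80, 10, 20, 90

accuracy = (tp + tn) / (tp + fp + fn + tn)   # correct / total
precision = tp / (tp + fp)                   # how trustworthy a "hazard" flag is
recall = tp / (tp + fn)                      # how many real hazards were caught
f1 = 2 * precision * recall / (precision + recall)  # harmonic mean

assert abs(accuracy - 0.85) < 1e-9
assert abs(precision - 8 / 9) < 1e-9
assert abs(recall - 0.8) < 1e-9
assert abs(f1 - 160 / 190) < 1e-9
```

For a hazard detector, recall is arguably the critical one: a missed hazard (false negative) is worse for the rider than a spurious alert.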

ROC Curve: Receiver Operating Characteristic (ROC) curve is a plot that displays the performance of a binary classifier system as the discrimination threshold is varied.

AUC: Area Under the ROC Curve (AUC) is a single number that represents the overall performance of the model, by computing the area under the ROC curve.
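AUC also equals the probability that a randomly chosen positive instance scores higher than a randomly chosen negative one, which gives a short way to compute it without tracing the curve. A sketch with made-up scores:

```python
# AUC via the rank interpretation: P(score of random positive > random negative).
# Scores and labels are made-up example data.
def auc(scores, labels):
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    # Count pairwise wins; ties count as half a win.
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

scores = [0.9, 0.8, 0.4, 0.3, 0.2]
labels = [1,   1,   0,   1,   0]
assert auc(scores, labels) == 5 / 6  # 5 of 6 positive/negative pairs ranked correctly
```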

Confusion Matrix: A table that shows the number of true positives, false positives, true negatives, and false negatives for a classification model.
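Building that table from predictions is a one-liner over (true, predicted) pairs. A sketch with toy labels (the data is made up; 1 = hazard, 0 = clear is an assumed convention):

```python
from collections import Counter

# Toy labels for illustration; 1 = hazard, 0 = clear road (assumed convention).
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

counts = Counter(zip(y_true, y_pred))
tp = counts[(1, 1)]  # hazard correctly flagged
fn = counts[(1, 0)]  # hazard missed
fp = counts[(0, 1)]  # false alarm
tn = counts[(0, 0)]  # clear road correctly passed

assert (tp, fn, fp, tn) == (3, 1, 1, 3)
```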