Having monitored the activity of the LabelBot manually as much as I could, I went through all the issues from January 2019 and found that in the following issues LabelBot recommended a set of labels that were either:
a. Incorrect
b. Incomplete
Metric used here - edit distance: basically the number of deletions and insertions needed to go from the predicted label set to the ground truth.
Assumption - the ground truth is correct (i.e. the labels given by the community are correct).
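For clarity, here is a minimal sketch of how that edit distance could be computed, assuming the predicted and community-assigned labels are available as plain sets of strings (the label names in the example are hypothetical):

```python
def label_edit_distance(predicted, ground_truth):
    """Number of deletions + insertions needed to turn the predicted
    label set into the ground-truth label set."""
    predicted, ground_truth = set(predicted), set(ground_truth)
    deletions = predicted - ground_truth   # incorrect labels the bot added
    insertions = ground_truth - predicted  # labels the bot missed (incomplete)
    return len(deletions) + len(insertions)

# Example: bot predicted {"Bug", "Python"} but the community labelled the
# issue {"Bug", "Backend"} -> distance = 2 (delete "Python", add "Backend").
print(label_edit_distance({"Bug", "Python"}, {"Bug", "Backend"}))  # 2
```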
I see a common trend across these issues, and since the same errors keep getting repeated I was wondering: does this mean the bot isn't learning as expected?