...
Recall here represents how often the classifier correctly predicted a label out of all the times the issue actually had that label.
| | Label was actually on the issue | Label was not on the issue |
|---|---|---|
| Label was predicted | Desired outcome | False positive – a high precision value means this is reduced |
| Label was not predicted | False negative – a high recall value means this is reduced | Desired outcome |
The F1 score balances precision and recall: it is the harmonic mean of the two, so it is high only when both are high.
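As a minimal sketch of the metrics above, the snippet below computes precision, recall, and F1 for a single label from true positive, false positive, and false negative counts (the counts shown are hypothetical, not from the actual evaluation):

```python
def precision_recall_f1(tp, fp, fn):
    """Return (precision, recall, f1) for one label.

    tp: times the label was predicted and was actually on the issue
    fp: times the label was predicted but was not on the issue
    fn: times the label was on the issue but was not predicted
    """
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    # Harmonic mean of precision and recall
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return precision, recall, f1

# Hypothetical example counts: 80 TP, 20 FP, 40 FN
p, r, f1 = precision_recall_f1(tp=80, fp=20, fn=40)
print(round(p, 3), round(r, 3), round(f1, 3))  # 0.8 0.667 0.727
```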
Motivations/Conclusion:
This lets us see which labels the model predicts accurately. Given a chosen accuracy threshold, the bot could apply a label only when the model's score for that label surpasses the threshold. As a result, we would be able to label new issues with confidence.
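The thresholding idea above can be sketched as follows. The label names, per-label scores, and the `labels_for_issue` helper are all hypothetical, assuming each label has a validation-set F1 score and the bot only keeps predictions for labels whose score clears the threshold:

```python
THRESHOLD = 0.75  # assumed cutoff; would be tuned on validation data

# Hypothetical per-label F1 scores measured on a validation set
label_scores = {"bug": 0.82, "docs": 0.64, "feature": 0.78}

# Labels the model is trusted to predict accurately
trusted_labels = {name for name, score in label_scores.items()
                  if score >= THRESHOLD}

def labels_for_issue(predicted_labels):
    """Keep only the predicted labels that pass the accuracy threshold."""
    return [label for label in predicted_labels if label in trusted_labels]

print(labels_for_issue(["bug", "docs"]))  # ['bug'] – 'docs' is below threshold
```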
...