Metrics, Regularization, Optimization, and Bayesian Statistics Q&A
## 🎯 I. Classification Metrics (Discrete Output)

These metrics are essential when dealing with discrete class predictions.

### 1. Accuracy

- **Definition:** The ratio of correct predictions to the total number of predictions.
- **Formula:** (TP + TN) / Total
- **Use Case:** Ideal for balanced datasets where all classes have similar frequencies. Provides a simple, quick overview of model performance.
- **Disadvantage:** Highly misleading on imbalanced datasets. A model can achieve 95% accuracy by simply predicting the majority class, making it useless for the minority class.

### 2. Precision

- **Definition:** Out of all positive predictions, how many were actually correct.
- **Formula:** TP / (TP + FP)
- **Use Case:** When the cost of a False Positive (FP) is very high (e.g., spam detection — you don't want to flag a legitimate email as spam; or autonomous driving — you don't want to falsely identify a safe object as a threat).
- **Advantage:** Measures the quality of positive predictions.

### 3. Recall (Sensitivity)

Def...
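To make the accuracy-vs.-precision trade-off concrete, here is a minimal plain-Python sketch (the function names and the 95/5 class split are illustrative, not from the text) showing how a majority-class predictor scores high accuracy on imbalanced data while its precision for the positive class collapses:

```python
def accuracy(tp, tn, fp, fn):
    """(TP + TN) / Total -- fraction of all predictions that were correct."""
    return (tp + tn) / (tp + tn + fp + fn)

def precision(tp, fp):
    """TP / (TP + FP) -- quality of the positive predictions."""
    return tp / (tp + fp) if (tp + fp) else 0.0

# Imbalanced dataset: 95 negatives, 5 positives.
# A model that always predicts the majority (negative) class yields:
# TP = 0, FN = 5, TN = 95, FP = 0.
print(accuracy(tp=0, tn=95, fp=0, fn=5))  # 0.95 -- looks excellent
print(precision(tp=0, fp=0))              # 0.0  -- useless for positives
```

The second line is exactly the failure mode described above: 95% accuracy with zero ability to identify the minority class.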