# Cost Functions
Cost functions, also known as loss functions or objective functions, play a crucial role in training artificial neural networks (ANNs). They measure the discrepancy between the network's predictions and the actual target values during training. The appropriate cost function depends on the task at hand, such as regression or classification. Here's a summary of some common cost functions for ANNs:
1. Mean Squared Error (MSE):
- Used for regression tasks.
- Measures the average squared difference between predicted values and actual target values.
- Penalizes large errors much more heavily than small ones, since each error is squared before averaging.
- Mathematically, for $n$ samples with targets $y_i$ and predictions $\hat{y}_i$, MSE is defined as:

$$\text{MSE} = \frac{1}{n}\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^2$$
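As a concrete illustration, here is a minimal NumPy sketch of MSE. The function name and sample values are illustrative, not taken from any particular library:

```python
import numpy as np

def mse(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Mean squared error: average of squared residuals."""
    return float(np.mean((y_true - y_pred) ** 2))

y_true = np.array([3.0, -0.5, 2.0, 7.0])
y_pred = np.array([2.5, 0.0, 2.0, 8.0])
print(mse(y_true, y_pred))  # squared residuals 0.25, 0.25, 0.0, 1.0 -> mean 0.375
```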
2. Mean Absolute Error (MAE):
- Another cost function for regression.
- Measures the average absolute difference between predicted values and actual target values.
- Less sensitive to outliers than MSE, since errors are not squared.
- Mathematically, MAE is defined as:

$$\text{MAE} = \frac{1}{n}\sum_{i=1}^{n}\left|y_i - \hat{y}_i\right|$$
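A matching NumPy sketch, again with illustrative names and sample data:

```python
import numpy as np

def mae(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Mean absolute error: average of absolute residuals."""
    return float(np.mean(np.abs(y_true - y_pred)))

y_true = np.array([3.0, -0.5, 2.0, 7.0])
y_pred = np.array([2.5, 0.0, 2.0, 8.0])
print(mae(y_true, y_pred))  # absolute residuals 0.5, 0.5, 0.0, 1.0 -> mean 0.5
```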
3. Cross-Entropy Loss (Log Loss):
- Used for binary and multiclass classification tasks.
- Measures the dissimilarity between predicted class probabilities and true class labels.
- Commonly paired with a logistic (sigmoid) output for binary problems and a softmax output for multiclass problems.
- For binary classification with labels $y_i \in \{0, 1\}$ and predicted probabilities $\hat{y}_i$:

$$L = -\frac{1}{n}\sum_{i=1}^{n}\left[y_i \log \hat{y}_i + (1 - y_i)\log\left(1 - \hat{y}_i\right)\right]$$

- For multiclass classification with $C$ classes and one-hot labels $y_{i,c}$:

$$L = -\frac{1}{n}\sum_{i=1}^{n}\sum_{c=1}^{C} y_{i,c} \log \hat{y}_{i,c}$$
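A minimal NumPy sketch of both variants. The `eps` clipping is a common numerical-stability trick to avoid `log(0)`; the function names and sample probabilities are illustrative:

```python
import numpy as np

def binary_cross_entropy(y_true, y_pred, eps=1e-12):
    """Binary cross-entropy; clips probabilities to avoid log(0)."""
    y_pred = np.clip(y_pred, eps, 1.0 - eps)
    return float(-np.mean(y_true * np.log(y_pred)
                          + (1.0 - y_true) * np.log(1.0 - y_pred)))

def categorical_cross_entropy(y_true, y_pred, eps=1e-12):
    """Multiclass cross-entropy for one-hot y_true and softmax-style y_pred."""
    y_pred = np.clip(y_pred, eps, 1.0)
    return float(-np.mean(np.sum(y_true * np.log(y_pred), axis=1)))

y_true = np.array([1.0, 0.0, 1.0])
y_pred = np.array([0.9, 0.2, 0.7])
print(binary_cross_entropy(y_true, y_pred))  # ~0.2284
```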
4. Hinge Loss (SVM Loss):
- Used for support vector machines (SVMs) and other margin-based classifiers.
- Encourages correct classification with a margin between classes.
- Mathematically, for labels $y_i \in \{-1, +1\}$ and raw model scores $\hat{y}_i$, hinge loss is defined as:

$$L = \frac{1}{n}\sum_{i=1}^{n}\max\left(0,\; 1 - y_i \hat{y}_i\right)$$
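A short NumPy sketch; note that hinge loss operates on raw (unsquashed) scores rather than probabilities, and the sample values here are illustrative:

```python
import numpy as np

def hinge_loss(y_true, scores):
    """Hinge loss for labels in {-1, +1} and raw model scores."""
    return float(np.mean(np.maximum(0.0, 1.0 - y_true * scores)))

y_true = np.array([1.0, -1.0, 1.0])
scores = np.array([0.8, -0.5, 2.0])
# margins: 0.2, 0.5, 0.0 (last sample is beyond the margin, so zero loss)
print(hinge_loss(y_true, scores))  # ~0.2333
```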
5. Huber Loss:
- Used for regression tasks.
- Combines the benefits of MSE and MAE: quadratic for small errors, linear for large errors.
- Less sensitive to outliers than MSE.
- Mathematically, with threshold $\delta > 0$ and residual $r = y - \hat{y}$, Huber loss is defined as:

$$L_\delta(r) = \begin{cases} \frac{1}{2} r^2 & \text{if } |r| \le \delta \\[4pt] \delta\left(|r| - \frac{\delta}{2}\right) & \text{otherwise} \end{cases}$$
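A minimal NumPy sketch of the piecewise definition above; `delta=1.0` is a common default, and the sample data (with one deliberate outlier) is illustrative:

```python
import numpy as np

def huber_loss(y_true, y_pred, delta=1.0):
    """Huber loss: quadratic for |residual| <= delta, linear beyond it."""
    r = y_true - y_pred
    quadratic = 0.5 * r ** 2
    linear = delta * (np.abs(r) - 0.5 * delta)
    return float(np.mean(np.where(np.abs(r) <= delta, quadratic, linear)))

y_true = np.array([3.0, -0.5, 2.0, 7.0])
y_pred = np.array([2.5, 0.0, 2.0, 10.0])  # last prediction is an outlier (residual -3.0)
print(huber_loss(y_true, y_pred))  # 0.6875; the outlier is penalized linearly, not quadratically
```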
The choice of cost function should align with the specific problem being solved, as different tasks (e.g., regression, binary classification, multiclass classification) call for different loss functions.