M2S1 – NOTES

Model Parameters vs Hyperparameters

Model Hyperparameters
• A model hyperparameter is a parameter whose value is set before the model starts training.
• Hyperparameters are settings that control the learning process of the model.
• They cannot be learned by fitting the model to the data.
• Example: the number of training epochs, as in the plot below.

[Figure: training loss (y-axis) plotted against the number of epochs (x-axis). The number of epochs is a hyperparameter and is set manually; setting it too low may cause underfitting, while setting it too high may cause overfitting.]

Hyperparameter Optimization
• Hyperparameter optimization is the process of selecting the optimal values for an ML model's hyperparameters.
• Tuning methods such as grid search or Bayesian optimization can be useful for fine-tuning these parameters.

Model Hyperparameters in Different ML Models

K-Nearest Neighbors (KNN)
• K Value: Determines the number of neighbors used for classification.
• Distance Metric: Specifies how distances are calculated (e.g., Euclidean, Manhattan).
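To make both ideas concrete, here is a minimal grid-search sketch over KNN's two hyperparameters using scikit-learn's GridSearchCV; the iris data set and the grid values are illustrative assumptions, not recommendations.

```python
# Minimal sketch: grid search over KNN's hyperparameters with scikit-learn.
# The iris data set and the grid values are illustrative assumptions.
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)

param_grid = {
    "n_neighbors": [3, 5, 7, 11],          # the K value
    "metric": ["euclidean", "manhattan"],  # the distance metric
}

# 5-fold cross-validated grid search: one model is fit per grid point per fold.
search = GridSearchCV(KNeighborsClassifier(), param_grid, cv=5)
search.fit(X, y)

print(search.best_params_)  # best combination found on this data
print(search.best_score_)   # its mean cross-validated accuracy
```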
Logistic Regression
• Regularization Strength (C): Controls the penalty for large coefficients.
• Penalty Type: Chooses the regularization method (e.g., L1 for Lasso, L2 for Ridge).

Decision Trees & Random Forests
• Maximum Depth: Limits the number of splits in a decision tree.
• Minimum Samples Split: Defines the minimum number of samples needed to split a node.
• Number of Trees (for Random Forests): Determines how many trees are used in the ensemble.

Support Vector Machines (SVM)
• Kernel Type: Chooses how the data is transformed (e.g., linear, polynomial, radial basis function (RBF)).
• C Parameter: Controls the trade-off between maximizing the margin and minimizing classification error.
• Gamma: Affects the influence of individual data points on the decision boundary.

Gradient Boosting (XGBoost, LightGBM, CatBoost)
• Learning Rate: Controls how much the contribution of each tree is shrunk.
• Number of Estimators: Sets the number of boosting iterations.
• Maximum Depth: Restricts tree complexity.
• Subsample: Defines the fraction of samples used for each boosting iteration.

Neural Networks (Deep Learning)
• Learning Rate: Controls how quickly the model updates its weights (e.g., in optimizers such as RMSProp or Adam).
• Batch Size: Defines the number of samples per training batch.
• Number of Layers: Determines the depth of the neural network.
• Dropout Rate: Helps prevent overfitting by randomly deactivating neurons during training.

Short code sketches showing how these hyperparameters are set for each model family follow below.
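For logistic regression, a minimal sketch of setting these two hyperparameters in scikit-learn; the C values and the solver choice are illustrative assumptions.

```python
from sklearn.linear_model import LogisticRegression

# In scikit-learn, C is the INVERSE regularization strength:
# smaller C means a stronger penalty on large coefficients.
lasso_like = LogisticRegression(penalty="l1", C=0.1, solver="liblinear")
ridge_like = LogisticRegression(penalty="l2", C=1.0)  # L2 is the default penalty
```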
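For decision trees and random forests, the same three hyperparameters map directly onto scikit-learn constructor arguments; the values below are assumed for illustration.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

# Depth and minimum-samples-to-split both cap how complex a single tree can grow.
tree = DecisionTreeClassifier(max_depth=5, min_samples_split=10)

# For a random forest, n_estimators additionally sets the number of trees.
forest = RandomForestClassifier(n_estimators=200, max_depth=5, min_samples_split=10)
```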
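For SVMs, a minimal scikit-learn sketch; the kernel, C, and gamma values are illustrative assumptions.

```python
from sklearn.svm import SVC

# kernel chooses the data transformation; C trades margin width against
# classification error; gamma sets each point's radius of influence (RBF kernel).
svm = SVC(kernel="rbf", C=1.0, gamma=0.1)
```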
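For gradient boosting, XGBoost, LightGBM, and CatBoost expose similarly named knobs; to keep the sketch dependency-free it uses scikit-learn's GradientBoostingClassifier instead, with assumed values.

```python
from sklearn.ensemble import GradientBoostingClassifier

gbm = GradientBoostingClassifier(
    learning_rate=0.1,  # shrinks each tree's contribution
    n_estimators=100,   # number of boosting iterations
    max_depth=3,        # restricts tree complexity
    subsample=0.8,      # fraction of samples used per boosting iteration
)
```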
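For neural networks, a minimal Keras sketch (the notes do not name a framework, so TensorFlow/Keras is an assumption, as are the input width, layer sizes, and all values).

```python
import tensorflow as tf

# Depth (number of layers) and dropout rate are fixed in the architecture;
# the input width (20 features) and layer sizes are illustrative assumptions.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(20,)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dropout(0.3),  # dropout rate
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

# The learning rate is a hyperparameter of the optimizer (Adam here).
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
              loss="binary_crossentropy")

# Batch size (and number of epochs) are passed at fit time, e.g.:
# model.fit(X_train, y_train, batch_size=32, epochs=10)
```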
