Best Practice Process for Model Selection and Monitoring: How to Avoid Traps in the Validation and Backtesting Cycle
Choosing an unsuitable scoring function to measure model performance can bias model selection. The most common example of such an improper scoring rule is accuracy. The talk addresses this and other prevalent traps, along with the following questions: How do you select the best model? Which scoring functions are suitable for measuring performance? How can overfitting be kept under control? Dr. Martin Dirrler presents a holistic approach consisting of an initial model selection step and a backtesting procedure suited to monitoring whether the actual performance of the selected model differs significantly from the expected one.
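To illustrate the trap the abstract alludes to, here is a minimal sketch (not taken from the talk; the data and model probabilities are invented for illustration) showing how two probabilistic classifiers can achieve identical accuracy yet differ sharply under a proper scoring rule such as the Brier score:

```python
# Two hypothetical probabilistic classifiers scored on the same labels.
y_true = [1, 1, 1, 0, 0]

# Model A: confident and well calibrated.
p_a = [0.9, 0.8, 0.9, 0.1, 0.2]
# Model B: barely crosses the 0.5 threshold on every case.
p_b = [0.55, 0.55, 0.55, 0.45, 0.45]

def accuracy(y, p):
    # Fraction of cases where thresholding at 0.5 recovers the label.
    return sum((pi >= 0.5) == bool(yi) for yi, pi in zip(y, p)) / len(y)

def brier(y, p):
    # Mean squared error between predicted probability and outcome:
    # a strictly proper scoring rule for binary outcomes.
    return sum((pi - yi) ** 2 for yi, pi in zip(y, p)) / len(y)

print(accuracy(y_true, p_a), accuracy(y_true, p_b))  # 1.0 1.0
print(brier(y_true, p_a), brier(y_true, p_b))        # A scores far better
```

Accuracy cannot distinguish the two models, while the Brier score rewards Model A's better-calibrated probabilities, which is exactly why selecting on accuracy alone can be biased.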