ML Case Studies
Preface
Technical Setup
1 Reproducibility of scientific papers
1.1 How to measure reproducibility? Classification of problems with reproducing scientific papers
1.1.1 Abstract
1.1.2 Introduction
1.1.3 Related Work
1.1.4 Methodology
1.1.5 Results
1.1.6 Summary and conclusions
1.2 Aging articles. How does time affect the reproducibility of scientific papers?
1.2.1 Abstract
1.2.2 Introduction
1.2.3 CodeExtractoR package
1.2.4 Methodology
1.2.5 Results
1.2.6 Summary and conclusions
1.3 Ways to reproduce articles in terms of release date and journal
1.3.1 Abstract
1.3.2 Methodology
1.3.3 Results
1.4 Reproducibility of outdated articles about up-to-date R packages
1.4.1 Abstract
1.4.2 Introduction and Motivation
1.4.3 Related Work
1.4.4 Methodology
1.4.5 Results
1.4.6 Summary and conclusions
1.5 Correlation between reproducibility of research papers and their objective
1.5.1 Abstract
1.5.2 Introduction and Motivation
1.5.3 Methodology
1.5.4 Results
1.5.5 Summary, conclusions and encouragement
1.6 How active development affects reproducibility
1.6.1 Abstract
1.6.2 Introduction and Motivation
1.6.3 Methodology
1.6.4 Results
1.6.5 Summary and conclusions
1.7 Reproducibility differences of articles published in various journals and using R or Python
1.7.1 Abstract
1.7.2 Introduction and Motivation
1.7.3 Methodology
1.7.4 Results
1.7.5 Summary and conclusions
2 Imputation
2.1 Default imputation efficiency comparison
2.1.1 Abstract
2.1.2 Introduction and Motivation
2.1.3 Related Work
2.1.4 Methodology
2.1.5 Results
2.1.6 Summary and conclusions
2.2 The Hajada Imputation Test
2.2.1 Abstract
2.2.2 Introduction and Motivation
2.2.3 Methodology
2.2.4 Results
2.2.5 Summary
2.2.6 Conclusions
2.3 Comparison of performance of data imputation methods in the context of their impact on the prediction efficiency of classification algorithms
2.3.1 Abstract
2.3.2 Introduction and Motivation
2.3.3 Methodology
2.3.4 Results
2.3.5 Summary and conclusions
2.4 Various data imputation techniques in R
2.4.1 Abstract
2.4.2 Introduction and Motivation
2.4.3 Methodology
2.4.4 Results
2.4.5 Summary and conclusions
2.5 Comparison of imputation techniques in the R programming language
2.5.1 Abstract
2.5.2 Introduction and Motivation
2.5.3 Methodology
2.5.4 Results
2.5.5 Conclusions
2.6 How imputation techniques interact with machine learning algorithms
2.6.1 Abstract
2.6.2 Introduction and Motivation
2.6.3 Methodology
2.6.4 Results
2.6.5 Summary and conclusions
3 Interpretability
3.1 Building an explainable model for ordinal classification on the Eucalyptus dataset. Meeting black box model performance levels
3.1.1 Abstract
3.1.2 Introduction and Motivation
3.1.3 Related Work
3.1.4 Methodology
3.1.5 Results
3.1.6 Model explanation
3.1.7 Summary and conclusions
3.1.8 References
3.2 Predicting code defects using interpretable static measures
3.2.1 Abstract
3.2.2 Introduction and Motivation
3.2.3 Dataset
3.2.4 Methodology
3.2.5 Results
3.2.6 Summary and conclusions
3.3 Using interpretable Machine Learning models in Higgs boson detection
3.3.1 Abstract
3.3.2 Introduction and Motivation
3.3.3 Related Work
3.3.4 Methodology
3.3.5 Results
3.3.6 Summary and conclusions
3.4 Can Automated Regression beat a linear model?
3.4.1 Abstract
3.4.2 Introduction and Motivation
3.4.3 Data
3.4.4 Methodology
3.4.5 Results
3.4.6 Summary and conclusions
3.5 Interpretable, non-linear feature engineering techniques for linear regression models - exploration on the concrete compressive strength dataset with a new feature importance metric
3.5.1 Abstract
3.5.2 Introduction and Related Work
3.5.3 Methodology
3.5.4 Results
3.5.5 Summary and conclusions
3.6 Surpassing a black box model's performance on unbalanced data with an interpretable one using advanced feature engineering
3.6.1 Abstract
3.6.2 Introduction and Motivation
3.6.3 Data
3.6.4 Methodology
3.6.5 Results
3.6.6 Final Results
3.6.7 Conclusions
3.7 Which Neighbours Affected House Prices in the ’90s?
3.7.1 Abstract
3.7.2 Introduction
3.7.3 Related Work
3.7.4 Data
3.7.5 Methodology
3.7.6 Results
3.7.7 Conclusions
3.8 Explainable Computer Vision with embedding and k-NN classifier
3.8.1 Abstract
3.8.2 Introduction
3.8.3 Methodology
3.8.4 Results
3.8.5 Discussion and Conclusion
3.8.6 Bibliography
4 Acknowledgements
References
Chapter 3 Interpretability