Posts with the tag interpretability:

Can you build an explainable model that outperforms a black box?

A word about black boxes Fierce competition can be observed nowadays: scientists keep surpassing one another in building better regression models. As those models grow more complex, it becomes almost impossible to illustrate how their results relate to the data in a way humans understand. Such models are commonly called ‘black boxes’. ‘Machine learning is frequently referred to as a black box—data goes in, decisions come out, but the processes between input and output are opaque’ ~ The Lancet. Despite their excellent performance, models with easily interpretable output are sometimes more desirable, e.g. in banking. What can be done? Results ready for further human analysis can be achieved with explainable models (linear models, decision trees, etc.).
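To make that concrete, here is a minimal sketch (assuming scikit-learn and one of its toy datasets, neither of which the post itself specifies) of what output ready for human analysis looks like: a fitted linear model whose coefficients can be read line by line.

```python
# A minimal sketch (assuming scikit-learn is available) of the kind of
# "explainable" output the post refers to: a linear model whose learned
# coefficients can be read and audited directly.
from sklearn.datasets import load_diabetes
from sklearn.linear_model import LinearRegression

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = LinearRegression().fit(X, y)

# Each coefficient states how the prediction moves per unit of the feature,
# so a human analyst can inspect and question every part of the model.
for name, coef in zip(X.columns, model.coef_):
    print(f"{name:>6}: {coef:+.2f}")
```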

Black-Box vs White-Box Duel

Prepare to fight The interpretability of machine learning models is gaining more and more interest in the scientific world. That is because artificial intelligence is used in many business solutions that affect our everyday life. Knowing how a model works can, among other things, reassure us about the safety of the implemented solution. We came across an article by Warsaw University of Technology students Wojciech Bogucki, Tomasz Makowski, and Dominik Rafacz, titled “Predicting code defects using interpretable static measures”, which touches on this topic. Black box vs white box Using interpretable models such as linear regression, decision trees, and k-nearest neighbors is one way to make your solution explainable.
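As a small illustration of the white-box route (our own sketch assuming scikit-learn, not code from the article), a shallow decision tree can be printed as plain if/else rules that a reviewer can follow by hand:

```python
# A small sketch of one interpretable model the excerpt mentions: a decision
# tree whose learned rules can be printed as human-readable statements.
# (Assumes scikit-learn; the dataset is just an illustrative stand-in.)
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# export_text renders the entire fitted model as readable rules,
# which is what makes shallow trees "white boxes".
print(export_text(tree, feature_names=load_iris().feature_names))
```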

Beat the Black Box!

Understanding things is good for your health There is no doubt that we live in a world defined by data. In fact, we always have; only now we have a wider variety of tools at our disposal to store and process all this information. We no longer need to search for structure in data by hand; we have models and AI for that. However, we still want, or rather feel the urge, to understand how all those analyses work, especially when we are talking about our health data, and that is what the authors of “Can Automated Regression beat linear model?” are talking about.

Time flies... and so does articles' reproducibility?

Sometimes we forget about what is really important… Have you ever stopped for a moment in the course of everyday life and taken a closer look at the reproducibility of a scientific article? Have you ever wondered: can I do what the authors did in this article and get the same results? If you are not in a technical industry, probably not… and that’s perfectly fine. Otherwise, it’s about time to start getting interested. Reproducibility, like wine? Or vice versa? When publishing a work, you definitely need to remember a few basic rules: correct documentation, ensuring that files are up to date, and so on; one such rule is sketched below. However, not everyone respects these rules, and hence some articles sooner or later lose readability, reproducibility, and therefore their value.
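One concrete example of such a rule (our illustration, not necessarily one from the article itself): fix the random seeds, so that a reader rerunning the analysis gets the same numbers you published.

```python
# A minimal sketch of one "basic rule" of reproducibility: pinning the
# sources of randomness so a rerun of the analysis produces the same
# output. (The exact rules in the post may differ; this is one common one.)
import random

import numpy as np

SEED = 42  # document the seed alongside the results

random.seed(SEED)
np.random.seed(SEED)

# Any stochastic step downstream (sampling, train/test splits, model
# initialisation) now yields the same output on every run.
sample = np.random.normal(size=3)
print(sample)  # identical across reruns with the same seed
```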

Explainable Computer Vision

What is this blog entry about? Black boxes are commonly used in computer vision. But do we have to use them? This article looks at the issue, and we try to understand it with our small (but trained over one semester of machine learning experience) brains and summarize it here. What is this article about? Computer vision is cool. But it would be just as cool to understand how it works, and that is not so obvious. Explainable methods of image recognition - which is de facto classification - cannot rely on logistic regression or decision trees, because models tend to lose transparency as their performance increases - not to mention the difficulty of understanding neural networks.
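For a taste of what an explainable method for images can look like, here is a rough sketch of occlusion sensitivity, a model-agnostic technique we picked for illustration (an assumption on our part, not necessarily the method from the article): cover parts of the image with a grey patch and watch how the class score reacts.

```python
# Occlusion sensitivity sketch (illustrative, not the article's method):
# slide a grey patch over the image and record how much the class score
# drops at each position; large drops mark regions the model relies on.
import numpy as np

def occlusion_map(predict, image, patch=8, stride=8):
    """Return a grid of score drops; high values mark important regions."""
    base = predict(image)
    h, w = image.shape[:2]
    heat = np.zeros(((h - patch) // stride + 1, (w - patch) // stride + 1))
    for i, y in enumerate(range(0, h - patch + 1, stride)):
        for j, x in enumerate(range(0, w - patch + 1, stride)):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch] = 0.5  # grey patch
            heat[i, j] = base - predict(occluded)     # score drop
    return heat

# Toy stand-in for a trained classifier: "cat score" = mean of centre pixels.
toy_predict = lambda img: img[24:40, 24:40].mean()
heatmap = occlusion_map(toy_predict, np.random.rand(64, 64))
print(heatmap.round(2))
```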