Posts with the tag XAI:

Can you build an explainable model that outperforms a black box?

A word about black boxes
Nowadays, fierce competition can be observed: scientists surpass one another in building ever better regression models. As these models grow more complex, it becomes almost impossible to illustrate how their results relate to the data in a way humans understand. Such models are commonly called ‘black boxes’. ‘Machine learning is frequently referred to as a black box—data goes in, decisions come out, but the processes between input and output are opaque’ ~ The Lancet. Despite their excellent performance, models with easily interpretable output are sometimes more desirable, e.g. in banking. What can be done? Results ready for further human analysis can be achieved with explainable models (linear models, decision trees, etc.).
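As a minimal illustration of that idea (a sketch, not taken from the post), the snippet below fits an ordinary linear regression with scikit-learn and reads the explanation straight off the fitted coefficients; the diabetes toy dataset is an arbitrary choice.

```python
# Minimal sketch: an interpretable model whose output is ready for human
# analysis. Dataset and library are illustrative assumptions.
from sklearn.datasets import load_diabetes
from sklearn.linear_model import LinearRegression

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = LinearRegression().fit(X, y)

# Each coefficient states how a one-unit change in a feature moves the
# prediction: the model explains itself, no extra tooling needed.
for name, coef in zip(X.columns, model.coef_):
    print(f"{name:>4}: {coef:+.1f}")
```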

Black-box vs White-box Duel

Prepare to fight
The interpretability of machine learning models is gaining more and more interest in the scientific world, because artificial intelligence is used in many business solutions that affect our everyday lives. Knowing how a model works can, among other things, assure us of the safety of the implemented solution. We came across an article on this topic, “Predicting code defects using interpretable static measures”, by Wojciech Bogucki, Tomasz Makowski, and Dominik Rafacz, students at the Warsaw University of Technology.

Black box vs white box
Using interpretable models such as linear regression, decision trees, and k-nearest neighbors is one way to make your solution explainable, as the sketch below illustrates.
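Here is a hedged sketch of the white-box side of the duel, assuming scikit-learn and an arbitrary toy dataset (neither comes from the article): a shallow decision tree whose learned rules can be printed and audited as plain if/else logic.

```python
# Sketch of a white-box model: a shallow decision tree whose entire
# decision logic can be rendered as human-readable rules.
# Dataset and depth are illustrative assumptions, not from the article.
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(data.data, data.target)

# export_text prints the full tree as nested if/else rules.
print(export_text(tree, feature_names=list(data.feature_names)))
```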

Beat the Black Box!

Understanding things is good for your health
There is no doubt that we live in a world defined by data. In fact, we always have; only now we have a wider variety of tools at our disposal to store and process all this information. We no longer need to search for structure in data by hand; we have models and AI for that. However, we still want, or rather feel the urge, to understand how all those analyses work. Especially when it comes to our health data, and that is exactly what the authors of “Can Automated Regression beat linear model?” are talking about.

Are black boxes inevitable?

Black vs white
Machine learning seems to be all about creating the model with the best performance, balancing its variance and accuracy well. Unfortunately, the pursuit of that balance makes us forget that, in the end, the model will serve human beings. If that is the case, a third factor should be considered: interpretability. When a model is unexplainable (a so-called black-box model), it may be treated as untrustworthy and become useless. This is a problem, since many models known for their high performance (like XGBoost) belong to the black-box camp.

A false(?) trade-off
So it would seem that explainability is, and has to be, sacrificed for better model performance. The sketch below shows one way to put that assumption to the test.
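One rough way to probe whether the trade-off is real is to cross-validate an interpretable model against a boosted ensemble on the same data. This is a sketch under assumptions not made in the post: scikit-learn's GradientBoostingRegressor stands in for XGBoost, and the dataset is an arbitrary toy choice.

```python
# Rough sketch: cross-validate a white-box model against a boosted ensemble
# on the same data to see how much performance explainability actually costs.
# GradientBoostingRegressor stands in for XGBoost; the dataset is arbitrary.
from sklearn.datasets import load_diabetes
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

X, y = load_diabetes(return_X_y=True)

models = [
    ("linear (white box)", LinearRegression()),
    ("boosting (black box)", GradientBoostingRegressor(random_state=0)),
]
for name, model in models:
    scores = cross_val_score(model, X, y, cv=5, scoring="r2")
    print(f"{name:>22}: mean R^2 = {scores.mean():.3f}")
```

If the two mean scores land close together on a given dataset, the supposed trade-off may indeed be false there; if the ensemble wins by a wide margin, the cost of interpretability is real for that problem.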