Tech Deep Dive

Deadly Triad in Reinforcement Learning

In the previous articles, we introduced the basics of reinforcement learning (RL) and explored its various applications in business. To delve deeper into the inner workings of reinforcement learning algorithms, we...

Reinforcement Learning Example and Framework

With this article, we want to bridge the gap between a basic understanding of reinforcement learning (RL) and solving a problem with RL methods. The article is divided into three sections. The first section is a short introduction to RL....

Explainable AI - Methods for explaining AI models

Explainable AI (XAI) is currently one of the most discussed topics in the field of AI. Developments such as the European Union's AI Act, which makes explainability a mandatory property of AI models in critical domains, have brought XAI into the focus of many companies developing or applying AI models. As this now applies to a large part of the economy, the demand for methods that can describe the decisions of models in a comprehensible way has increased immensely.

Why, AI? The basics of Explainable AI (XAI)

So in theory, with XAI methods and the right know-how, even complex black-box models can be at least partially interpreted. But what does this look like in practice? In many projects and use cases, interpretability is an important pillar of overall success. Security aspects, a lack of trust in model results, possible future regulations, and ethical concerns all drive the need for interpretable ML models.
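One widely used model-agnostic technique of the kind referred to above is permutation feature importance: shuffle one feature at a time and measure how much the model's score degrades. A minimal sketch, assuming scikit-learn is available (the model, dataset, and parameters here are illustrative, not from the article):

```python
# Permutation feature importance: a model-agnostic XAI technique.
# Illustrative data and model choices; not the article's own example.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic dataset: 4 features, only 2 of which carry signal.
X, y = make_classification(
    n_samples=300, n_features=4, n_informative=2,
    n_redundant=0, random_state=0,
)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature and measure the drop in accuracy:
# the larger the drop, the more the model relies on that feature.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, imp in enumerate(result.importances_mean):
    print(f"feature_{i}: mean importance {imp:.3f}")
```

The same call works for any fitted estimator with a `score` method, which is what makes it attractive for black-box models.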

How to deal with missing values

Methods of imputation and when to use them

Why missing values should interest us: In our work as data scientists, we often deal with time series data and forecasting applications. As with all "classic"...
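For time series, the choice of imputation method matters because the values are ordered. A minimal sketch of three common options using pandas (the series and its values are made up for illustration):

```python
# Three simple imputation methods on a small time series with gaps.
# The data here is hypothetical, purely for illustration.
import numpy as np
import pandas as pd

s = pd.Series(
    [10.0, np.nan, 12.0, np.nan, np.nan, 15.0],
    index=pd.date_range("2023-01-01", periods=6, freq="D"),
)

# Mean imputation: ignores the time order entirely.
mean_filled = s.fillna(s.mean())

# Forward fill: carries the last observed value forward.
ffilled = s.ffill()

# Time-based linear interpolation: suited to smoothly varying series.
interpolated = s.interpolate(method="time")

print(interpolated.tolist())  # [10.0, 11.0, 12.0, 13.0, 14.0, 15.0]
```

Which option fits depends on the series: forward fill preserves step-like behavior, interpolation assumes gradual change, and mean imputation can distort trends and seasonality.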