Tech Deep Dive

Why, AI? The basics of Explainable AI (XAI)

So in theory, with XAI methods and the right know-how, even complex black-box models can be at least partially interpreted. But what does this look like in practice? In many projects and use cases, interpretability is a key pillar of overall success. Security considerations, a lack of trust in model results, possible future regulations, and ethical concerns all drive the need for interpretable ML models.
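To make "partially interpreting a black-box model" concrete, here is a minimal sketch of one common model-agnostic XAI technique, permutation feature importance: shuffle one feature at a time and measure how much the model's predictive accuracy drops. The data, model, and function names below are illustrative assumptions, not taken from the article.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: the label depends only on feature 0; feature 1 is pure noise.
X = rng.normal(size=(500, 2))
y = (X[:, 0] > 0).astype(int)

def model_predict(X):
    # Stand-in for any black-box model's predict() method;
    # here a fixed threshold rule on feature 0.
    return (X[:, 0] > 0).astype(int)

def accuracy(y_true, y_pred):
    return float(np.mean(y_true == y_pred))

baseline = accuracy(y, model_predict(X))

# Permutation importance: drop in accuracy when one feature is shuffled.
importances = []
for j in range(X.shape[1]):
    X_perm = X.copy()
    X_perm[:, j] = rng.permutation(X_perm[:, j])
    importances.append(baseline - accuracy(y, model_predict(X_perm)))

print(importances)  # feature 0 shows a large drop, feature 1 none
```

The technique treats the model as a pure input-output function, which is exactly why it works for black boxes: no access to weights or gradients is needed, only predictions.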

Trust in Autonomous Driving

How text explanations help build trust in autonomous systems

To "drive" a car that steers itself, or to share the road with autonomous vehicles - what sounds like a dream come true for technology enthusiasts and visionaries seems, in the eyes of...

Feature Stores - An Overview

From time to time, a new software system appears on the horizon of the technology landscape. In this article, we look at one such new data access and processing layer: the feature store. It will play an important role in building intelligent systems based on machine learning (ML) algorithms.