Introduction In the previous articles, we introduced the basics of reinforcement learning (RL) and explored its various applications in business. To delve deeper into the inner workings of reinforcement learning algorithms, we...
Tech Deep Dive
Reinforcement Learning - Algorithms in the Brain
Introduction Usually our blog articles are focused on business use cases and the analysis of business data. But in this article we want to take a different approach. We discuss how methods we use to solve business problems can help us...
Reinforcement Learning from Human Feedback (RLHF) in the field of large language models
Recent machine learning models, such as those behind ChatGPT, have caused a sensation with their impressive results. In general, large language models (LLMs) have an increasingly strong influence on the...
MLOps Platform - Building, Scaling and Operationalising
In the previous post, we saw how important it is to get involved with MLOps early on. MLOps platforms help to reduce manual steps, increase team collaboration, meet regulatory and compliance requirements, and...
Reinforcement Learning Use Cases for Business Applications
Introduction Artificial intelligence (AI) is revolutionising many facets of our daily lives and is a key driver of current developments in business and industry. Today, AI-powered algorithms solve tedious problems and use cases on...
Reinforcement Learning Example and Framework
With this article we want to bridge the gap between the basic understanding of reinforcement learning (RL) and solving a problem with RL methods. The article is divided into three sections. The first section is a short introduction to RL....
Explainable AI - Methods for explaining AI models
Explainable AI (XAI) is currently one of the most discussed topics in the field of AI. Developments such as the European Union's AI Act, which makes explainability a mandatory property of AI models in critical domains, have brought XAI into the focus of many companies developing or applying AI models. As this now applies to a large part of the economy, the demand for methods that can describe the decisions of models in a comprehensible way has increased immensely.
Why, AI? The basics of Explainable AI (XAI)
So in theory, with XAI methods and the right know-how, even complex black-box models can be at least partially interpreted. But what does this look like in practice? In many projects and use cases, interpretability is an important pillar of overall success. Security aspects, lack of trust in model results, possible future regulations, and ethical concerns all drive the need for interpretable ML models.
With Automated Machine Learning (Auto ML) on the rise – is there still a need for human data scientists?
In short, the answer is yes. Automated Machine Learning (Auto ML) does not make data scientists redundant. Rather, it is a useful tool that increases their productivity. The greatest benefit comes when data scientists use Auto ML tools to...
How to deal with missing values
Methods of imputation and when to use them
Why missing values should interest us: In our work as data scientists, we often deal with time series data and forecasting applications. As with all "classic"...
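As a taste of the imputation methods the article covers, here is a minimal sketch of two common strategies for a time series with gaps: mean imputation and forward fill. The function names and the sample series are illustrative, not taken from the article.

```python
# Minimal sketch of two common imputation strategies for time series data.
# Missing values are represented as None; the data and names are illustrative.
from statistics import mean

def mean_impute(series):
    """Replace missing values with the mean of the observed values."""
    observed = [x for x in series if x is not None]
    fill = mean(observed)
    return [fill if x is None else x for x in series]

def forward_fill(series):
    """Carry the last observed value forward; leading gaps stay missing."""
    result, last = [], None
    for x in series:
        if x is not None:
            last = x
        result.append(last)
    return result

ts = [10.0, None, 12.0, None, None, 16.0]
print(mean_impute(ts))
print(forward_fill(ts))   # -> [10.0, 10.0, 12.0, 12.0, 12.0, 16.0]
```

Mean imputation ignores temporal order and can flatten trends, while forward fill preserves the last known state — which one is appropriate depends on the data-generating process, a trade-off the article discusses.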