7 challenges in AI projects

by [at] REDAKTION | 22 July 2021 | Basics

Most companies have realized by now that artificial intelligence (AI) promises enormous potential for almost all industries and application areas. However, when it comes to planning and implementing AI projects, decision makers still face numerous challenges.

In this article, we discuss seven common stumbling blocks that need to be addressed to make an AI project a success for everyone involved.

1. A unified understanding of AI

There is no single, universal definition of AI – everyone associates different concepts and ideas with the term. Precisely this can become a challenge in a project. All parties involved, including the specialist departments, should be aligned in advance: discuss exactly what AI is and is not capable of, and dispel myths and misunderstandings. Any concerns employees may have, such as fear of losing their jobs or of critical situations arising from AI predictions, should also be taken seriously. Only then is everyone on the same page.

2. The data: quantity and quality

Data is a company's most valuable asset and the basis for every AI project. Training models requires very large amounts of data – and not every company has enough of it. Besides quantity, the quality of the data is a second challenge: the data must cover all scenarios relevant to the use case so that the model is equipped for real operation. Anomalies, for example, can only be taken into account during training – and detected later in the application – if they are actually present in the data sets. Furthermore, if the data sets are imbalanced in any way, the model acquires an undesirable bias and may make decisions that are far from optimal.
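A quick sanity check for such imbalance can be run before any training starts. The following sketch is purely illustrative – the labels and the 10% threshold are assumptions, not recommendations from the article:

```python
from collections import Counter

def class_balance(labels):
    """Return each class's share of the dataset."""
    counts = Counter(labels)
    total = len(labels)
    return {cls: n / total for cls, n in counts.items()}

def is_imbalanced(labels, threshold=0.10):
    """Flag the dataset if any class falls below `threshold` share."""
    return any(share < threshold for share in class_balance(labels).values())

# Hypothetical sensor readings: 90 normal samples vs. 10 anomalies
labels = ["ok"] * 90 + ["anomaly"] * 10
print(class_balance(labels))   # {'ok': 0.9, 'anomaly': 0.1}
print(is_imbalanced(labels))   # False (anomaly share is exactly 0.10)
```

What counts as "too imbalanced" depends on the use case; the point is to measure the class shares explicitly rather than discover the bias after deployment.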

3. Expectations that are too high

Before every project, its costs and benefits are evaluated – and especially with new, complex technologies such as AI, decision makers are understandably skeptical. The added value of such a project must be apparent, yet in AI projects there is no guarantee of success. Whether a neural network can be trained successfully depends on many factors, above all on the data. Exploring the data and training the networks already incurs effort and cost, even if the process ultimately fails. Every adopter therefore takes on a certain financial risk. Expectations should be set accordingly: there is a real possibility that the project will fail.

4. Interpreting the results

However common scenarios of machines taking over the world may be in fiction, it is unlikely that decisions will be made solely by an AI in the foreseeable future. Instead, the technology provides decision makers with a data foundation for making informed decisions. For this, however, the results of the AI must be interpreted correctly. They are rarely black and white; every prediction comes with a certain accuracy, and this accuracy must be put in relation to the accuracy of human analysis. After all, humans also make mistakes, for example when diagnosing diseases on the basis of X-ray images. It is therefore important to compare AI models and humans in terms of prediction quality.
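Such a comparison can be as simple as scoring both against the same ground truth. The data below is entirely made up for illustration – ten hypothetical X-ray diagnoses, not a real study:

```python
def accuracy(predictions, ground_truth):
    """Fraction of predictions that match the ground truth."""
    correct = sum(p == t for p, t in zip(predictions, ground_truth))
    return correct / len(ground_truth)

# Hypothetical diagnoses for ten X-ray images (1 = disease, 0 = healthy)
truth = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
model = [1, 0, 1, 0, 0, 0, 1, 0, 1, 1]  # model output
human = [1, 0, 0, 1, 0, 1, 1, 0, 1, 0]  # radiologist's calls

print(f"model accuracy: {accuracy(model, truth):.0%}")  # model accuracy: 80%
print(f"human accuracy: {accuracy(human, truth):.0%}")  # human accuracy: 80%
```

In this toy example both reach the same accuracy while making different mistakes – exactly the kind of nuance that gets lost if AI results are read as absolute truths.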

5. Explainability

In most cases, AI is a black box: not even a data scientist can clearly explain why the model made a certain decision. That is why “Explainable AI” is a broad research field with the goal of making the technology more transparent to humans – which will reduce reservations and concerns about AI in the future. So far, the basis for a decision can be explained at least to some degree. Until there is full transparency, however, one must accept that AI will make decisions whose reasoning may not be fully comprehensible.
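One widely used post-hoc technique from this research field is permutation importance: shuffle one input feature and measure how much the model's accuracy drops. A minimal sketch on a hand-coded toy model follows – the "model" and data are illustrative assumptions, not a real trained network:

```python
import random

def toy_model(row):
    """Toy 'black box': predicts 1 whenever the first feature exceeds 0.5."""
    return 1 if row[0] > 0.5 else 0

def accuracy(model, rows, labels):
    """Fraction of rows the model classifies correctly."""
    return sum(model(r) == y for r, y in zip(rows, labels)) / len(labels)

def permutation_importance(model, rows, labels, feature, seed=0):
    """Accuracy drop after shuffling one feature column across rows."""
    rng = random.Random(seed)
    baseline = accuracy(model, rows, labels)
    column = [r[feature] for r in rows]
    rng.shuffle(column)
    shuffled = [list(r) for r in rows]  # copy so the original data is untouched
    for r, v in zip(shuffled, column):
        r[feature] = v
    return baseline - accuracy(model, shuffled, labels)

rows = [[0.9, 0.1], [0.2, 0.8], [0.7, 0.3], [0.1, 0.9]]
labels = [1, 0, 1, 0]
# Feature 0 drives the decision, so shuffling it can hurt accuracy;
# feature 1 is ignored by the model, so its importance is zero.
print(permutation_importance(toy_model, rows, labels, 0))
print(permutation_importance(toy_model, rows, labels, 1))  # 0.0
```

Techniques like this do not open the black box itself, but they reveal which inputs a decision actually depends on – a first step toward the transparency the research field aims for.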

6. Skepticism

As with any new technology, AI is met with a certain amount of skepticism. In addition to the factors already mentioned, the effort required to collect and label training data often deters decision makers from adopting it – even though the use of AI increases the effectiveness of certain processes in the long term. As a result, potential within companies goes unexploited and companies risk losing their competitiveness. In our article on AI 2.0 we explain why even non-tech companies should now look into AI and their own possible use cases, so that Germany does not fall behind as an attractive location for AI.

7. Who owns the IP?

Before an AI project starts, it should always be clarified who will own the trained neural network and thus the intellectual property. The data remains the property of its creator, i.e., the respective department, but the service provider often wants to use the knowledge gained from the project elsewhere. When using AI functionality from the major providers via services or APIs (such as Amazon’s Alexa), customers often have to agree that their data may be used to improve the services. This aspect should therefore be explicitly regulated in the contract to avoid disputes later on.

<a href="https://www.alexanderthamm.com/en/blog/author/at-redaktion/" target="_self">[at] REDAKTION</a>


Our [at] editorial team consists of various employees who prepare the blog articles with the greatest care and to the best of their knowledge. Our experts in each field regularly provide you with the latest articles from the data science and AI sphere. We hope you enjoy reading them.
