Bias in AI decisions - causes and countermeasures

By [at] Editorial | 8 March 2021 | Tech Deep Dive

AI is being used to automate more and more decisions. In many applications, such as credit assessment, applicant screening and fraud detection, avoiding any kind of discrimination is crucial from both an ethical and a legal perspective. With its AI strategy, the Federal Government pays explicit attention to the issue of bias in AI decisions and to discrimination in the application of AI processes. The General Data Protection Regulation (GDPR) also states that, in the case of automated decisions, discriminatory effects based on racial or ethnic origin, political opinion, religion or belief, trade union membership, genetic or health status or sexual orientation (confidential information) must be prevented.

Causes and effects

Decisions can be biased regardless of their source. However, decisions based on algorithms are more comprehensible than decisions made by humans, which enables developers to make bias in AI decisions visible. Bias can be defined as a systematic, repeatable error in a decision-making system. It can lead to a degradation of decision quality or to unfair or discriminatory outcomes, such as favouring a particular user group. Avoiding bias to improve model quality is well established in practice, but the ethical use of AI remains an active research topic.

Bias can have several sources: the data selection, the target variable (label), the developers and the model itself.

AI systems have no inherent understanding of whether the data they process is objective. If there is a bias in the data, the model adopts it. Algorithms are written by humans, who are naturally biased. Furthermore, the method used can itself introduce bias, for example if it is unsuitable for the specific problem.

In the allocation of public housing, for example, bias could have the following causes:

Data selection
There used to be fewer single fathers, so their data is underrepresented. Their requests are therefore not processed correctly.

Labels
The allocation process has changed. Historical data therefore no longer represents the current allocation process, and model quality decreases.

Developer
The developer has no children and therefore does not sufficiently take into account information on family size.

Algorithm
The relationship between the target and the input variables is too complex for the model used (underfitting). This can be remedied by using a more complex model.

Dealing with bias in practice

Optimising fairness in models is often at odds with optimising model quality. Awareness of bias and an agreed definition of fairness are therefore essential in any project, since a model can be fair according to one definition and unfair according to another. Only once fairness has been defined can the decision-making process be checked for bias and fairness.

Bias in AI decisions can be detected, and subsequently corrected, by analysing the data set. This includes outlier analysis, checking whether dependencies in the data change over time, or simply plotting variables split into appropriate groups, e.g. the distribution of the target variable for each gender.
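The group-wise comparison can be done with a few lines of pandas. A minimal sketch, assuming a DataFrame `df` with a binary target column "approved" and a group column "gender" (both hypothetical names):

```python
import pandas as pd

def group_target_rates(df: pd.DataFrame, group_col: str, target_col: str) -> pd.Series:
    """Share of positive outcomes per group, sorted so gaps stand out."""
    return df.groupby(group_col)[target_col].mean().sort_values()

# Hypothetical usage: a large spread between groups warrants a closer look.
# rates = group_target_rates(df, group_col="gender", target_col="approved")
# print(rates)
```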

In order to build a fair model, it is not sufficient to simply omit confidential information from the input variables, as other input variables may be stochastically dependent on it.
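One way to spot such proxy variables is to estimate how much each candidate feature reveals about a confidential attribute. A minimal sketch using scikit-learn's mutual information estimator; the column names are hypothetical:

```python
import pandas as pd
from sklearn.feature_selection import mutual_info_classif

def proxy_scores(df: pd.DataFrame, feature_cols: list[str], sensitive_col: str) -> pd.Series:
    """Mutual information between each input feature and a confidential attribute."""
    X = pd.get_dummies(df[feature_cols])          # encode categorical features
    y = df[sensitive_col]
    mi = mutual_info_classif(X, y, random_state=0)
    return pd.Series(mi, index=X.columns).sort_values(ascending=False)

# Features with high scores (e.g. a postcode or commuting time) can act as
# proxies for the confidential attribute even if it is excluded from the model.
```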

A reference data set enables further fairness analysis. The ideal reference data set contains all model-relevant information as well as the confidential information, at the frequencies expected in real operation. By applying the model to this data set, hidden bias can be made visible, e.g. discrimination against minorities, even if ethnic background is not part of the model inputs.
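A minimal sketch of this check, assuming a fitted classifier `model` and a reference DataFrame `reference_df` containing the model's input columns plus a confidential column such as "ethnicity" (all names hypothetical):

```python
import pandas as pd

def outcome_rates_on_reference(model, reference_df: pd.DataFrame,
                               feature_cols: list[str], sensitive_col: str) -> pd.Series:
    """Positive-prediction rate per group in the reference data set."""
    preds = model.predict(reference_df[feature_cols])
    return (pd.Series(preds, index=reference_df.index)
              .groupby(reference_df[sensitive_col])
              .mean())

# Large differences between the groups reveal hidden bias, even though the
# confidential column itself is never passed to the model.
```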

There are specialised libraries (e.g. AIF360, fairlearn) that were developed to calculate fairness measures and thereby detect bias in models. They assume that the data set used contains the confidential information. They also provide methods to reduce bias in AI decisions.
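As an illustration, the following sketch computes per-group metrics with fairlearn; the tiny arrays are made-up placeholders for real model outputs and a real confidential attribute:

```python
import numpy as np
from fairlearn.metrics import MetricFrame, selection_rate, demographic_parity_difference
from sklearn.metrics import accuracy_score

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])                   # observed outcomes (made up)
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])                   # model predictions (made up)
gender = np.array(["f", "f", "f", "f", "m", "m", "m", "m"])   # confidential attribute (made up)

mf = MetricFrame(
    metrics={"accuracy": accuracy_score, "selection_rate": selection_rate},
    y_true=y_true, y_pred=y_pred, sensitive_features=gender,
)
print(mf.by_group)      # metrics broken down per group
print(mf.difference())  # largest between-group gap per metric
print(demographic_parity_difference(y_true, y_pred, sensitive_features=gender))
```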

Error analysis of the model results makes it possible to find data samples the model struggles with. This often helps to identify underrepresented groups, e.g. by looking at samples where the model has chosen the wrong class with high confidence.
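A minimal sketch of such an error analysis, assuming a fitted scikit-learn classifier `model` with predict_proba and a labelled test set `X_test`, `y_test` (hypothetical names):

```python
import pandas as pd

def confidently_wrong(model, X_test: pd.DataFrame, y_test: pd.Series, n: int = 20) -> pd.DataFrame:
    """Return the n misclassified samples the model was most confident about."""
    proba = model.predict_proba(X_test)
    pred = model.classes_[proba.argmax(axis=1)]
    confidence = proba.max(axis=1)
    wrong = pred != y_test.to_numpy()
    return (X_test.assign(true_label=y_test.to_numpy(), predicted=pred, confidence=confidence)
                  .loc[wrong]
                  .sort_values("confidence", ascending=False)
                  .head(n))

# Inspecting these samples often surfaces underrepresented groups or labelling issues.
```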

It is important to monitor the model throughout its lifecycle and to allow users to understand the rationale behind model decisions. Even with more complex models, this is possible using Explainable AI methods such as SHAP values, which also help to uncover hidden biases.

A real example: discrimination based on migration background through the use of commuting time in hiring decisions (source: hbr.org).
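Picking up the commuting-time example, the following sketch shows how SHAP values expose which features drive a model's decisions. The data, feature names and model are made up for illustration; a strong effect of a feature like commuting time would flag a potential proxy for a confidential attribute:

```python
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier

# Synthetic stand-in data; in practice this would be the real application data.
rng = np.random.default_rng(0)
X = pd.DataFrame(rng.normal(size=(300, 3)),
                 columns=["work_experience", "commuting_time", "qualification_score"])
y = (X["work_experience"] + 0.8 * X["commuting_time"] > 0).astype(int)

model = GradientBoostingClassifier(random_state=0).fit(X, y)
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)   # contribution of each feature to each decision
shap.summary_plot(shap_values, X)        # global view of which features drive the model
```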

If the training data does not sufficiently match the real data, additional data can be collected and integrated into the model. If this is not possible, upsampling, data augmentation or downsampling can be used to achieve a better representation.
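A minimal sketch of upsampling an underrepresented group with scikit-learn, assuming a DataFrame `df` with a group column "household_type" in which "single_father" is underrepresented (both names hypothetical):

```python
import pandas as pd
from sklearn.utils import resample

def upsample_group(df: pd.DataFrame, group_col: str, group_value, target_size: int) -> pd.DataFrame:
    """Resample a minority group with replacement until it reaches target_size rows."""
    minority = df[df[group_col] == group_value]
    rest = df[df[group_col] != group_value]
    boosted = resample(minority, replace=True, n_samples=target_size, random_state=0)
    return pd.concat([rest, boosted]).sample(frac=1, random_state=0)  # shuffle

# Hypothetical usage:
# df_balanced = upsample_group(df, "household_type", "single_father", target_size=5000)
```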

Not all bias is bad: by deliberately introducing a countermeasure, a known bias can be counteracted. For example, an underrepresented minority could receive bonus points in automated applicant screening if a minimum representation quota is to be reached.
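A minimal sketch of such a deliberate countermeasure in applicant screening; the score column, the minority flag, the bonus value and the quota are all hypothetical:

```python
import pandas as pd

def shortlist_with_quota(candidates: pd.DataFrame, top_n: int = 10,
                         quota: float = 0.2, bonus: float = 5.0) -> pd.DataFrame:
    """Add bonus points for the underrepresented group if the shortlist misses the quota."""
    shortlist = candidates.nlargest(top_n, "score")
    if shortlist["minority"].mean() < quota:
        adjusted = candidates.copy()
        adjusted.loc[adjusted["minority"], "score"] += bonus
        shortlist = adjusted.nlargest(top_n, "score")
    return shortlist
```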

Conclusion

The analysis of bias, and especially of discrimination, is not only ethically and legally necessary when automated decisions affect people. In practice, it also often yields additional information that improves predictive quality, transparency and monitoring, and thus the entire decision-making process, even if performance and fairness are theoretically conflicting goals. And finally, it also helps in court if things ever get serious.

Author

[at] EDITORIAL

Our [at] editorial team consists of various employees who prepare the blog articles with the greatest care and to the best of their knowledge. Our experts from the respective fields regularly provide you with up-to-date contributions from the data science and AI field. We hope you enjoy reading.
