
AI Algorithms in HR: How to Mobilize Them Ethically?

I- The Rise of Artificial Intelligence in Human Resources Management 

With the rapid evolution of Artificial Intelligence technologies, particularly machine learning, new uses and practices are emerging in human resources management that were previously untapped. AI-powered HR Tech solutions, including TOP (The Augmented Manager), are arriving in a market that is just discovering what these tools can do, and that is simultaneously beginning to question the GDPR and ethical challenges they raise. The North American market is well ahead, prompting French and European AI HR providers to accelerate and differentiate themselves. Since most of them act as data processors on behalf of their clients, they have a role to play in this transition and must establish tools to manage these challenges effectively, both in terms of GDPR compliance and ethical processing.

We are witnessing exponential growth not only in the AI market but also in HR analytics and the broader HRIS market. Data and its processing are at the heart of the evolution of HR professions. These developments are as promising as they are challenging: making the best use of AI technologies requires simplified access to well-orchestrated, high-quality data, combined with careful consideration of ethics, the GDPR, and functional needs.

New actors with various specialties are emerging, such as Revolv, which helps major clients retrieve and process HR data before feeding it to the AI services that consume it. A network is forming among these new actors, and a new ecosystem is developing rapidly. Startup agility fits well with the needs of major clients, who must adopt these disruptive tools quickly to maintain a competitive advantage in a dynamic global market.

II- Algorithmic Biases Leading to New Forms of Discrimination 

AI algorithms enable public and private organizations to achieve significant goals in various sectors, but they can also generate discrimination against certain individuals. Discriminatory practices can arise at different stages of the AI production cycle, involving the data, the algorithms, and the users. Historical biases, inherited from a socio-technical world shaped by power dynamics and social inequalities, are among the best known. Biases can also be introduced during algorithm construction, such as evaluation bias and omitted-variable bias. Nor are AI biases solely attributable to algorithm builders and data scientists; users contribute as well (presentation bias, behavioral bias, etc.).

In recruitment, where human cognitive biases are numerous, algorithms help recruiters target candidate profiles more effectively. However, despite good predictive performance, statistical tests have shown that these algorithms, which know nothing of inclusive policies, tend to replicate the biases of the human recruiters whose decisions they learn from.
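To illustrate the kind of statistical test involved, here is a minimal sketch, with purely hypothetical shortlisting counts for two candidate groups; a significant chi-square result indicates that selection rates differ by more than chance alone would explain.

```python
# Minimal sketch: testing whether shortlisting rates differ between two
# candidate groups. The counts below are hypothetical, for illustration only.
from scipy.stats import chi2_contingency

observed = [
    [48, 152],  # group A: [shortlisted, rejected]
    [30, 170],  # group B: [shortlisted, rejected]
]

chi2, p_value, dof, _ = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Selection rates differ more than chance alone would explain.")
```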

Discrimination in algorithmic decisions can manifest more subtly than traditional discriminatory practices by humans. Machines, lacking consciousness, cannot be intentionally driven by sexist or racist prejudice. Discriminatory effects nonetheless occur when an AI system learns from human decisions that may themselves be biased. The legal concept of indirect discrimination, defined as apparently neutral practices that nevertheless disadvantage certain individuals, is better suited to characterizing these effects. Moreover, AI creates new categories that are not correlated with protected characteristics yet still lead to the differentiation of certain social groups, making its decisions difficult to justify.

III- How to Make Artificial Intelligence More Equitable? 

To make AI more equitable, the field of AI ethics has emerged in recent years. AI ethics is a discipline that studies the ethical issues raised by AI and brings together various academic communities for this purpose. Its sub-field of fair machine learning (FairML) focuses on developing technical instruments, metrics and algorithms, to combat discriminatory biases concretely. Many initiatives have appeared to regulate the design, deployment, and governance of AI, making equality an unavoidable issue, as evidenced by the numerous documents produced by public and private organizations to combat algorithmic discrimination.

It is essential to distinguish AI ethics from the legal regulations that exist or are emerging to regulate, in a binding way, the development and deployment of AI systems. The draft AI Act is crucial here, as it aims to impose obligations on AI system providers and users to ensure compliance with fundamental rights legislation throughout an AI system's life cycle. While AI ethics can address fairness, the legal dimension must not be ignored: the principles of equality and non-discrimination, long established in France and Europe, still apply, even if they must be adapted to the specificities of algorithmic biases. Current legal tools for reducing the risk of algorithmic discrimination include anti-discrimination law and data protection regulations.

Similar to the GDPR, which profoundly changed how companies handle data, the European Union's future AI Act promises to disrupt certain practices. Rather than passively awaiting new regulations to ensure their AI systems do not violate fundamental rights, some companies are taking proactive measures to make their algorithms “ethical.” Several means are available to make AI systems as equitable as possible.

Awareness of biases throughout the algorithm’s life cycle is the first step, bearing in mind that humans are not perfectly rational beings and that machines do not reason. Avoiding discriminatory biases requires attention during the data collection and cleaning phases, and the choices data scientists make during modeling, such as how variable categories are created, are decisive.

Once the algorithm is built, evaluating its predictive performance alone is insufficient. Because of its ability to exploit correlations, AI can discriminate against people based on protected characteristics even when the forbidden variables themselves are not used. Potential biases related to gender or ethnic origin must therefore be tested methodically and quantitatively, comparing algorithmic decisions at an aggregated scale using indicators such as demographic parity. Identified biases can then be corrected: researchers have proposed various debiasing techniques, such as reweighting the training data or imposing fairness constraints during learning.
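As a concrete illustration, here is a minimal sketch, not tied to any particular product, of how demographic parity can be measured and how training data can be reweighted (following the classic reweighing idea of Kamiran and Calders). The DataFrame columns `gender` and `hired` used below are hypothetical placeholders.

```python
import pandas as pd

def demographic_parity_gap(df: pd.DataFrame, group_col: str, decision_col: str) -> float:
    """Absolute gap between the highest and lowest positive-decision rates."""
    rates = df.groupby(group_col)[decision_col].mean()
    return float(rates.max() - rates.min())

def reweighing_weights(df: pd.DataFrame, group_col: str, label_col: str) -> pd.Series:
    """Per-row weights making group membership independent of the label.

    Each (group, label) cell gets weight P(group) * P(label) / P(group, label),
    so the weighted data satisfies demographic parity.
    """
    p_group = df[group_col].value_counts(normalize=True)
    p_label = df[label_col].value_counts(normalize=True)
    p_joint = df.groupby([group_col, label_col]).size() / len(df)
    return df.apply(
        lambda row: p_group[row[group_col]] * p_label[row[label_col]]
        / p_joint[(row[group_col], row[label_col])],
        axis=1,
    )

# Hypothetical usage:
# gap = demographic_parity_gap(applications, "gender", "hired")
# weights = reweighing_weights(applications, "gender", "hired")
```

The resulting weights can then be passed as `sample_weight` to the `fit` method of most scikit-learn estimators, so the model is trained on data in which the sensitive attribute no longer carries information about the outcome.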

IV- Case Study: TOP – AI-Powered Turnover Prediction and Prevention Solution 

Among the most important points highlighted by the TOP team, which develops a turnover (resignation) prediction and manager-support tool, are the following mechanisms:

  1. Avoid using prohibited or overly sensitive data (e.g., gender, age, and other data deemed sensitive by the CNIL), whether in the company’s own algorithms or in those of its providers. 
  2. Rigorously adhere to the GDPR regulations established and enforced by the CNIL. 
  3. “Debias” algorithmic logic as much as possible, recognizing that the data itself is often biased because it reflects human activity. 
  4. Supervise algorithmic processing monthly to avoid the overtreatment of certain job categories or profiles. TOP’s team has built a dashboard that lets any user (manager, HR business partner, or HR manager) monitor this treatment and make informed decisions (a minimal sketch follows this list). 
  5. Keep humans in the decision loop in all cases, or add an internal control team (e.g., the HR team as a counterweight to managers), so the technology serves as a counseling and support tool for action. 
  6. Follow the legal framework established by the European Union: the AI Act. 
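The following is a minimal sketch of the kind of monthly monitoring described in point 4; it is not TOP’s actual dashboard code, and the columns `month`, `job_category`, and `flagged` are hypothetical placeholders for a log of algorithmic alerts.

```python
import pandas as pd

def monthly_treatment_rates(alerts: pd.DataFrame) -> pd.DataFrame:
    """Share of employees flagged by the algorithm, per job category and month."""
    return (
        alerts.groupby(["month", "job_category"])["flagged"]
        .mean()
        .unstack("job_category")
    )

def overtreated(rates: pd.DataFrame, tolerance: float = 0.20) -> pd.DataFrame:
    """Mark categories whose monthly rate exceeds that month's average
    by more than `tolerance` (an arbitrary threshold a reviewer would tune)."""
    return rates.gt(rates.mean(axis=1) + tolerance, axis=0)
```

A dashboard built on such a table lets an HR business partner see at a glance whether one population is being disproportionately targeted from one month to the next.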

Although we are still in the early stages, the release of ChatGPT has drawn public attention to the new technological capabilities AI brings. A complete transformation of our ways of working and collaborating is underway, and it will be a positive one provided such precautions are mobilized. Adherence to ethics, CNIL opinions, and GDPR standards, together with the sensible use of artificial intelligence, will give HR professionals the answers they need to meet current challenges through technological progress.

As a concluding anecdote, when asked how to debias an AI system in HR, ChatGPT suggested: 

  1. Using machine learning algorithms to recognize and correct biases. 
  2. Employing analytical tools to identify biases and their sources. 
  3. Revising recruitment and training policies to guarantee fair opportunities. 
  4. Ensuring performance evaluation systems are objective and unbiased. 
  5. Implementing audit processes to verify that HR systems and policies are bias-free. 
  6. Integrating diversity and inclusion practices into every aspect of HR policy. 
